<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE manualpage SYSTEM "../style/manualpage.dtd">
<?xml-stylesheet type="text/xsl" href="../style/manual.en.xsl"?>

<manualpage>
  <relativepath href=".." />

  <title>Apache Performance Notes</title>

  <summary>

    <note type="warning"><strong>Warning:</strong>
    This document has not been fully updated
    to take into account changes made in the 2.0 version of the
    Apache HTTP Server. Some of the information may still be
    relevant, but please use it with care.</note>

    <p>Originally written by Dean Gaudet.</p>

    <p>Apache 2.0 is a general-purpose webserver, designed to
    provide a balance of flexibility, portability, and performance.
    Although it has not been designed specifically to set benchmark
    records, Apache 2.0 is capable of high performance in many
    real-world situations.</p>

    <p>Compared to Apache 1.3, release 2.0 contains many additional
    optimizations to increase throughput and scalability. Most of
    these improvements are enabled by default. However, there are
    compile-time and run-time configuration choices that can
    significantly affect performance. This document describes the
    options that a server administrator can configure to tune the
    performance of an Apache 2.0 installation. Some of these
    configuration options enable the httpd to better take advantage
    of the capabilities of the hardware and OS, while others allow
    the administrator to trade functionality for speed.</p>

  </summary>

  <section id="hardware">

    <title>Hardware and Operating System Issues</title>

    <p>The single biggest hardware issue affecting webserver
    performance is RAM. A webserver should never ever have to swap,
    because swapping increases the latency of each request beyond a
    point that users consider "fast enough". This causes users to hit
    stop and reload, further increasing the load. You can, and
    should, control the <directive module="mpm_common"
    >MaxClients</directive> setting so that your server
    does not spawn so many children that it starts swapping.</p>
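
    <p>As a rough illustration only (the right value depends on how
    much memory each child process uses on your particular system),
    a hypothetical prefork configuration that caps the number of
    children might look like this:</p>

<example><pre>
# Hypothetical value: choose MaxClients so that
# (number of children) x (memory per child) fits in physical RAM
&lt;IfModule prefork.c&gt;
    MaxClients 150
&lt;/IfModule&gt;
</pre></example>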

    <p>Beyond that the rest is mundane: get a fast enough CPU, a
    fast enough network card, and fast enough disks, where "fast
    enough" is something that needs to be determined by
    experimentation.</p>

    <p>Operating system choice is largely a matter of local
    concerns. But some guidelines that have proven generally
    useful are:</p>

    <ul>
      <li>
        <p>Run the latest stable release and patchlevel of the
        operating system that you choose. Many OS suppliers have
        introduced significant performance improvements to their
        TCP stacks and thread libraries in recent years.</p>
      </li>

      <li>
        <p>If your OS supports a <code>sendfile(2)</code> system
        call, make sure you install the release and/or patches
        needed to enable it. (With Linux, for example, this means
        using Linux 2.4 or later. For early releases of Solaris 8,
        you may need to apply a patch.) On systems where it is
        available, <code>sendfile</code> enables Apache 2 to deliver
        static content faster and with lower CPU utilization.</p>
      </li>
    </ul>

  </section>

  <section id="runtime">

    <title>Run-Time Configuration Issues</title>

    <related>
      <modulelist>
        <module>mod_dir</module>
        <module>mpm_common</module>
        <module>mod_status</module>
      </modulelist>
      <directivelist>
        <directive module="core">AllowOverride</directive>
        <directive module="mod_dir">DirectoryIndex</directive>
        <directive module="core">HostnameLookups</directive>
        <directive module="core">EnableMMAP</directive>
        <directive module="core">EnableSendfile</directive>
        <directive module="core">KeepAliveTimeout</directive>
        <directive module="prefork">MaxSpareServers</directive>
        <directive module="prefork">MinSpareServers</directive>
        <directive module="core">Options</directive>
        <directive module="mpm_common">StartServers</directive>
      </directivelist>
    </related>

    <section>

      <title><code>HostnameLookups</code></title>

      <p>Prior to Apache 1.3, <directive module="core"
      >HostnameLookups</directive> defaulted to <code>On</code>.
      This adds latency to every request because it requires a
      DNS lookup to complete before the request is finished. In
      Apache 1.3 this setting defaults to <code>Off</code>.
      However (1.3 or later), if you use any <code>Allow from domain</code>
      or <code>Deny from domain</code> directives then you will pay for
      a double reverse DNS lookup (a reverse, followed by a forward
      to make sure that the reverse is not being spoofed). So for the
      highest performance avoid using these directives (it's fine to
      use IP addresses rather than domain names).</p>

      <p>Note that it's possible to scope the directives, such as
      within a <code>&lt;Location /server-status&gt;</code> section.
      In this case the DNS lookups are only performed on requests
      matching the criteria. Here's an example which disables lookups
      except for <code>.html</code> and <code>.cgi</code> files:</p>

<example><pre>
HostnameLookups off
&lt;Files ~ "\.(html|cgi)$"&gt;
    HostnameLookups on
&lt;/Files&gt;
</pre></example>

      <p>Even so, if you only need DNS names in some CGIs, you
      could consider doing the <code>gethostbyname</code> call in the
      specific CGIs that need it.</p>

      <p>Similarly, if you need to have hostname information in your
      server logs in order to generate reports of this information,
      you can postprocess your log file with <a
      href="../programs/logresolve.html"><code>logresolve</code></a>,
      so that these lookups can be done without making the client wait.
      It is recommended that you do this postprocessing, and any other
      statistical analysis of the log file, somewhere other than your
      production web server machine, in order that this activity does
      not adversely affect server performance.</p>

    </section>

    <section>

      <title><code>FollowSymLinks</code> and <code>SymLinksIfOwnerMatch</code></title>

      <p>Wherever in your URL-space you do not have an <code>Options
      FollowSymLinks</code>, or you do have an <code>Options
      SymLinksIfOwnerMatch</code>, Apache will have to issue extra
      system calls to check up on symlinks: one extra call per
      filename component. For example, if you had:</p>

<example><pre>
DocumentRoot /www/htdocs
&lt;Directory /&gt;
    Options SymLinksIfOwnerMatch
&lt;/Directory&gt;
</pre></example>

      <p>and a request is made for the URI <code>/index.html</code>,
      then Apache will perform <code>lstat(2)</code> on
      <code>/www</code>, <code>/www/htdocs</code>, and
      <code>/www/htdocs/index.html</code>. The results of these
      <code>lstats</code> are never cached, so they will occur on
      every single request. If you really desire the symlinks
      security checking you can do something like this:</p>

<example><pre>
DocumentRoot /www/htdocs
&lt;Directory /&gt;
    Options FollowSymLinks
&lt;/Directory&gt;
&lt;Directory /www/htdocs&gt;
    Options -FollowSymLinks +SymLinksIfOwnerMatch
&lt;/Directory&gt;
</pre></example>

      <p>This at least avoids the extra checks for the
      <directive module="core">DocumentRoot</directive> path.
      Note that you'll need to add similar sections if you
      have any <directive module="mod_alias">Alias</directive> or
      <directive module="mod_rewrite">RewriteRule</directive> paths
      outside of your document root. For highest performance,
      and no symlink protection, set <code>FollowSymLinks</code>
      everywhere, and never set <code>SymLinksIfOwnerMatch</code>.</p>
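
      <p>To illustrate the note above about <directive
      module="mod_alias">Alias</directive> paths: a sketch, assuming a
      hypothetical <code>/icons/</code> alias mapped outside the
      document root, would add a matching section such as:</p>

<example><pre>
Alias /icons/ /www/icons/
&lt;Directory /www/icons&gt;
    Options FollowSymLinks
&lt;/Directory&gt;
</pre></example>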

    </section>

    <section>

      <title><code>AllowOverride</code></title>

      <p>Wherever in your URL-space you allow overrides (typically
      <code>.htaccess</code> files) Apache will attempt to open
      <code>.htaccess</code> for each filename component. For
      example,</p>

<example><pre>
DocumentRoot /www/htdocs
&lt;Directory /&gt;
    AllowOverride all
&lt;/Directory&gt;
</pre></example>

      <p>and a request is made for the URI <code>/index.html</code>,
      then Apache will attempt to open <code>/.htaccess</code>,
      <code>/www/.htaccess</code>, and
      <code>/www/htdocs/.htaccess</code>. The solutions are similar
      to the previous case of <code>Options FollowSymLinks</code>.
      For highest performance use <code>AllowOverride None</code>
      everywhere in your filesystem.</p>
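
      <p>For instance, a minimal sketch of a configuration that turns
      off <code>.htaccess</code> processing entirely:</p>

<example><pre>
DocumentRoot /www/htdocs
&lt;Directory /&gt;
    AllowOverride None
&lt;/Directory&gt;
</pre></example>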

    </section>

    <section>

      <title>Negotiation</title>

      <p>If at all possible, avoid content-negotiation if you're
      really interested in every last ounce of performance. In
      practice the benefits of negotiation outweigh the performance
      penalties. There's one case where you can speed up the server.
      Instead of using a wildcard such as:</p>

<example><pre>
DirectoryIndex index
</pre></example>

    <p>Use a complete list of options:</p>

<example><pre>
DirectoryIndex index.cgi index.pl index.shtml index.html
</pre></example>

      <p>where you list the most common choice first.</p>

      <p>Also note that explicitly creating a <code>type-map</code>
      file provides better performance than using
      <code>MultiViews</code>, as the necessary information can be
      determined by reading this single file, rather than having to
      scan the directory for files.</p>
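
      <p>As a sketch, a type-map for a hypothetical document
      <code>foo</code> with English and French variants (assuming
      <code>.var</code> files are assigned to the <code>type-map</code>
      handler) might look like:</p>

<example><pre>
URI: foo

URI: foo.en.html
Content-type: text/html
Content-language: en

URI: foo.fr.html
Content-type: text/html
Content-language: fr
</pre></example>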

    </section>

    <section>

      <title>Memory-mapping</title>

      <p>In situations where Apache 2.0 needs to look at the contents
      of a file being delivered--for example, when doing server-side-include
      processing--it normally memory-maps the file if the OS supports
      some form of <code>mmap(2)</code>.</p>

      <p>On some platforms, this memory-mapping improves performance.
      However, there are cases where memory-mapping can hurt the performance
      or even the stability of the httpd:</p>

      <ul>
        <li>
          <p>On some operating systems, <code>mmap</code> does not scale
          as well as <code>read(2)</code> when the number of CPUs increases.
          On multiprocessor Solaris servers, for example, Apache 2.0 sometimes
          delivers server-parsed files faster when <code>mmap</code> is disabled.</p>
        </li>

        <li>
          <p>If you memory-map a file located on an NFS-mounted filesystem
          and a process on another NFS client machine deletes or truncates
          the file, your process may get a bus error the next time it tries
          to access the mapped file content.</p>
        </li>
      </ul>

      <p>For installations where either of these factors applies, you
      should use <code>EnableMMAP off</code> to disable the memory-mapping
      of delivered files. (Note: This directive can be overridden on
      a per-directory basis.)</p>
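
      <p>For example, to disable memory-mapping only for a hypothetical
      NFS-mounted directory while leaving it enabled elsewhere:</p>

<example><pre>
&lt;Directory "/nfs/shared-docs"&gt;
    EnableMMAP off
&lt;/Directory&gt;
</pre></example>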

    </section>

    <section>

      <title>Sendfile</title>

      <p>In situations where Apache 2.0 can ignore the contents of the file
      to be delivered -- for example, when serving static file content --
      it normally uses the kernel sendfile support to deliver the file if
      the OS supports the <code>sendfile(2)</code> operation.</p>

      <p>On most platforms, using sendfile improves performance by eliminating
      separate read and send mechanics.  However, there are cases where using
      sendfile can harm the stability of the httpd:</p>

      <ul>
        <li>
          <p>Some platforms may have broken sendfile support that the build
          system did not detect, especially if the binaries were built on
          another box and moved to a machine with broken sendfile support.</p>
        </li>
        <li>
          <p>With NFS-mounted files, the kernel may be unable
          to reliably serve the network file through its own cache.</p>
        </li>
      </ul>

      <p>For installations where either of these factors applies, you
      should use <code>EnableSendfile off</code> to disable sendfile
      delivery of file contents. (Note: This directive can be overridden 
      on a per-directory basis.)</p>
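
      <p>For example, to disable sendfile delivery server-wide (the
      directive can likewise be scoped to just the affected
      directories):</p>

<example><pre>
EnableSendfile off
</pre></example>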

    </section>

    <section>

      <title>Process Creation</title>

      <p>Prior to Apache 1.3 the <directive module="prefork"
      >MinSpareServers</directive>, <directive module="prefork"
      >MaxSpareServers</directive>, and <directive module="mpm_common"
      >StartServers</directive> settings all had drastic effects on
      benchmark results. In particular, Apache required a "ramp-up"
      period in order to reach a number of children sufficient to serve
      the load being applied. After the initial spawning of
      <directive module="mpm_common">StartServers</directive> children,
      only one child per second would be created to satisfy the
      <directive module="prefork">MinSpareServers</directive>
      setting. So a server being accessed by 100 simultaneous
      clients, using the default <directive module="mpm_common"
      >StartServers</directive> of <code>5</code>, would take on
      the order of 95 seconds to spawn enough children to handle
      the load. This works fine in practice on real-life servers,
      because they aren't restarted frequently. But it does really
      poorly on benchmarks which might only run for ten minutes.</p>

      <p>The one-per-second rule was implemented in an effort to
      avoid swamping the machine with the startup of new children. If
      the machine is busy spawning children it can't service
      requests. But it has such a drastic effect on the perceived
      performance of Apache that it had to be replaced. As of Apache
      1.3, the code will relax the one-per-second rule. It will spawn
      one, wait a second, then spawn two, wait a second, then spawn
      four, and it will continue exponentially until it is spawning
      32 children per second. It will stop whenever it satisfies the
      <directive module="prefork">MinSpareServers</directive>
      setting.</p>

      <p>This appears to be responsive enough that it's almost
      unnecessary to twiddle the <directive module="prefork"
      >MinSpareServers</directive>, <directive module="prefork"
      >MaxSpareServers</directive> and <directive module="mpm_common"
      >StartServers</directive> knobs. When more than 4 children are
      spawned per second, a message will be emitted to the
      <directive module="core">ErrorLog</directive>. If you
      see a lot of these errors then consider tuning these settings.
      Use the <module>mod_status</module> output as a guide.</p>

    <p>Related to process creation is process death induced by the
    <directive module="mpm_common">MaxRequestsPerChild</directive>
    setting. By default this is <code>0</code>,
    which means that there is no limit to the number of requests
    handled per child. If your configuration currently has this set
    to some very low number, such as <code>30</code>, you may want to bump this
    up significantly. If you are running SunOS or an old version of
    Solaris, limit this to <code>10000</code> or so because of memory leaks.</p>
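
    <p>As an illustration only (suitable values depend entirely on your
    load, and should be guided by <module>mod_status</module> output),
    a hypothetical prefork tuning block might look like this:</p>

<example><pre>
&lt;IfModule prefork.c&gt;
    StartServers          10
    MinSpareServers        5
    MaxSpareServers       20
    # 0 = unlimited; use a value such as 10000 only on systems
    # with known memory leaks
    MaxRequestsPerChild    0
&lt;/IfModule&gt;
</pre></example>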

    <p>When keep-alives are in use, children will be kept busy
    doing nothing waiting for more requests on the already open
    connection. The default <directive module="core"
    >KeepAliveTimeout</directive> of <code>15</code>
    seconds attempts to minimize this effect. The tradeoff here is
    between network bandwidth and server resources. In no event
    should you raise this above about <code>60</code> seconds, as <a
    href="http://www.research.digital.com/wrl/techreports/abstracts/95.4.html">
    most of the benefits are lost</a>.</p>

    </section>

  </section>

  <section id="compiletime">

    <title>Compile-Time Configuration Issues</title>

    <section>

      <title>mod_status and ExtendedStatus On</title>

      <p>If you include <module>mod_status</module> and you also set
      <code>ExtendedStatus On</code> when building and running
      Apache, then on every request Apache will perform two calls to
      <code>gettimeofday(2)</code> (or <code>times(2)</code>
      depending on your operating system), and (pre-1.3) several
      extra calls to <code>time(2)</code>. This is all done so that
      the status report contains timing indications. For highest
      performance, set <code>ExtendedStatus off</code> (which is the
      default).</p>
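
      <p>If you do build <module>mod_status</module> in, a minimal
      sketch that keeps the status handler available without the extra
      timing calls might be:</p>

<example><pre>
ExtendedStatus Off
&lt;Location /server-status&gt;
    SetHandler server-status
&lt;/Location&gt;
</pre></example>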

    </section>

    <section>

      <title>accept Serialization - multiple sockets</title>

      <p>This discusses a shortcoming in the Unix socket API. Suppose
      your web server uses multiple <directive module="mpm_common"
      >Listen</directive> statements to listen on either multiple
      ports or multiple addresses. In order to test each socket
      to see if a connection is ready Apache uses
      <code>select(2)</code>. <code>select(2)</code> indicates that a
      socket has <em>zero</em> or <em>at least one</em> connection
      waiting on it. Apache's model includes multiple children, and
      all the idle ones test for new connections at the same time. A
      naive implementation looks something like this (these examples
      do not match the code, they're contrived for pedagogical
      purposes):</p>

<example><pre>
    for (;;) {
        for (;;) {
            fd_set accept_fds;

            FD_ZERO (&amp;accept_fds);
            for (i = first_socket; i &lt;= last_socket; ++i) {
                FD_SET (i, &amp;accept_fds);
            }
            rc = select (last_socket+1, &amp;accept_fds, NULL, NULL, NULL);
            if (rc &lt; 1) continue;
            new_connection = -1;
            for (i = first_socket; i &lt;= last_socket; ++i) {
                if (FD_ISSET (i, &amp;accept_fds)) {
                    new_connection = accept (i, NULL, NULL);
                    if (new_connection != -1) break;
                }
            }
            if (new_connection != -1) break;
        }
        process the new_connection;
    }
</pre></example>

      <p>But this naive implementation has a serious starvation problem.
      Recall that multiple children execute this loop at the same
      time, and so multiple children will block at
      <code>select</code> when they are in between requests. All
      those blocked children will awaken and return from
      <code>select</code> when a single request appears on any socket
      (the number of children which awaken varies depending on the
      operating system and timing issues). They will all then fall
      down into the loop and try to <code>accept</code> the
      connection. But only one will succeed (assuming there's still
      only one connection ready), the rest will be <em>blocked</em>
      in <code>accept</code>. This effectively locks those children
      into serving requests from that one socket and no other
      sockets, and they'll be stuck there until enough new requests
      appear on that socket to wake them all up. This starvation
      problem was first documented in <a
      href="http://bugs.apache.org/index/full/467">PR#467</a>. There
      are at least two solutions.</p>

      <p>One solution is to make the sockets non-blocking. In this
      case the <code>accept</code> won't block the children, and they
      will be allowed to continue immediately. But this wastes CPU
      time. Suppose you have ten idle children in
      <code>select</code>, and one connection arrives. Then nine of
      those children will wake up, try to <code>accept</code> the
      connection, fail, and loop back into <code>select</code>,
      accomplishing nothing. Meanwhile none of those children are
      servicing requests that occurred on other sockets until they
      get back up to the <code>select</code> again. Overall this
      solution does not seem very fruitful unless you have as many
      idle CPUs (in a multiprocessor box) as you have idle children,
      not a very likely situation.</p>

      <p>Another solution, the one used by Apache, is to serialize
      entry into the inner loop. The loop looks like this
      (differences highlighted):</p>

<example><pre>
    for (;;) {
        <strong>accept_mutex_on ();</strong>
        for (;;) {
            fd_set accept_fds;

            FD_ZERO (&amp;accept_fds);
            for (i = first_socket; i &lt;= last_socket; ++i) {
                FD_SET (i, &amp;accept_fds);
            }
            rc = select (last_socket+1, &amp;accept_fds, NULL, NULL, NULL);
            if (rc &lt; 1) continue;
            new_connection = -1;
            for (i = first_socket; i &lt;= last_socket; ++i) {
                if (FD_ISSET (i, &amp;accept_fds)) {
                    new_connection = accept (i, NULL, NULL);
                    if (new_connection != -1) break;
                }
            }
            if (new_connection != -1) break;
        }
        <strong>accept_mutex_off ();</strong>
        process the new_connection;
    }
</pre></example>

      <p><a id="serialize" name="serialize">The functions</a>
      <code>accept_mutex_on</code> and <code>accept_mutex_off</code>
      implement a mutual exclusion semaphore. Only one child can have
      the mutex at any time. There are several choices for
      implementing these mutexes. The choice is defined in
      <code>src/conf.h</code> (pre-1.3) or
      <code>src/include/ap_config.h</code> (1.3 or later). Some
      architectures do not have any locking choice made, on these
      architectures it is unsafe to use multiple
      <directive module="mpm_common">Listen</directive>
      directives.</p>

      <dl>
        <dt><code>USE_FLOCK_SERIALIZED_ACCEPT</code></dt>

        <dd>
          <p>This method uses the <code>flock(2)</code> system call to
          lock a lock file (located by the <directive module="mpm_common"
          >LockFile</directive> directive).</p>
        </dd>

        <dt><code>USE_FCNTL_SERIALIZED_ACCEPT</code></dt>

        <dd>
          <p>This method uses the <code>fcntl(2)</code> system call to
          lock a lock file (located by the <directive module="mpm_common"
          >LockFile</directive> directive).</p>
        </dd>

        <dt><code>USE_SYSVSEM_SERIALIZED_ACCEPT</code></dt>

        <dd>
          <p>(1.3 or later) This method uses SysV-style semaphores to
          implement the mutex. Unfortunately SysV-style semaphores have
          some bad side-effects. One is that it's possible Apache will
          die without cleaning up the semaphore (see the
          <code>ipcs(8)</code> man page). The other is that the
          semaphore API allows for a denial of service attack by any
          CGIs running under the same uid as the webserver
          (<em>i.e.</em>, all CGIs, unless you use something like
          <code>suexec</code> or <code>cgiwrapper</code>). For these
          reasons this method is not used on any architecture except
          IRIX (where the previous two are prohibitively expensive
          on most IRIX boxes).</p>
        </dd>

        <dt><code>USE_USLOCK_SERIALIZED_ACCEPT</code></dt>

        <dd>
          <p>(1.3 or later) This method is only available on IRIX, and
          uses <code>usconfig(2)</code> to create a mutex. While this
          method avoids the hassles of SysV-style semaphores, it is not
          the default for IRIX. This is because on single processor
          IRIX boxes (5.3 or 6.2) the uslock code is two orders of
          magnitude slower than the SysV-semaphore code. On
          multi-processor IRIX boxes the uslock code is an order of
          magnitude faster than the SysV-semaphore code. Kind of a
          messed up situation. So if you're using a multiprocessor IRIX
          box then you should rebuild your webserver with
          <code>-DUSE_USLOCK_SERIALIZED_ACCEPT</code> on the
          <code>EXTRA_CFLAGS</code>.</p>
        </dd>

        <dt><code>USE_PTHREAD_SERIALIZED_ACCEPT</code></dt>

        <dd>
          <p>(1.3 or later) This method uses POSIX mutexes and should
          work on any architecture implementing the full POSIX threads
          specification; however, it appears to work only on Solaris (2.5
          or later), and even then only in certain configurations. If
          you experiment with this you should watch out for your server
          hanging and not responding. Servers that serve only static
          content may work just fine.</p>
        </dd>
      </dl>

      <p>If your system has another method of serialization which
      isn't in the above list then it may be worthwhile adding code
      for it (and submitting a patch back to Apache).</p>

      <p>Another solution that has been considered but never
      implemented is to partially serialize the loop -- that is, let
      in a certain number of processes. This would only be of
      interest on multiprocessor boxes where it's possible multiple
      children could run simultaneously, and the serialization
      actually doesn't take advantage of the full bandwidth. This is
      a possible area of future investigation, but priority remains
      low because highly parallel web servers are not the norm.</p>

      <p>Ideally you should run servers without multiple
      <directive module="mpm_common">Listen</directive>
      statements if you want the highest performance.
      But read on.</p>

    </section>

    <section>

      <title>accept Serialization - single socket</title>

      <p>The above is fine and dandy for multiple socket servers, but
      what about single socket servers? In theory they shouldn't
      experience any of these same problems because all children can
      just block in <code>accept(2)</code> until a connection
      arrives, and no starvation results. In practice this hides
      almost the same "spinning" behaviour discussed above in the
      non-blocking solution. The way that most TCP stacks are
      implemented, the kernel actually wakes up all processes blocked
      in <code>accept</code> when a single connection arrives. One of
      those processes gets the connection and returns to user-space,
      the rest spin in the kernel and go back to sleep when they
      discover there's no connection for them. This spinning is
      hidden from the user-land code, but it's there nonetheless.
      This can result in the same load-spiking wasteful behaviour
      that a non-blocking solution to the multiple sockets case
      can.</p>

      <p>For this reason we have found that many architectures behave
      more "nicely" if we serialize even the single socket case. So
      this is actually the default in almost all cases. Crude
      experiments under Linux (2.0.30 on a dual Pentium pro 166
      w/128Mb RAM) have shown that the serialization of the single
      socket case causes less than a 3% decrease in requests per
      second over unserialized single-socket. But unserialized
      single-socket showed an extra 100ms latency on each request.
      This latency is probably a wash on long haul lines, and only an
      issue on LANs. If you want to override the single socket
      serialization you can define
      <code>SINGLE_LISTEN_UNSERIALIZED_ACCEPT</code> and then
      single-socket servers will not serialize at all.</p>

    </section>

    <section>

      <title>Lingering Close</title>

      <p>As discussed in <a
      href="http://www.ics.uci.edu/pub/ietf/http/draft-ietf-http-connection-00.txt">
      draft-ietf-http-connection-00.txt</a> section 8, in order for
      an HTTP server to <strong>reliably</strong> implement the
      protocol it needs to shutdown each direction of the
      communication independently (recall that a TCP connection is
      bi-directional, each half is independent of the other). This
      fact is often overlooked by other servers, but is correctly
      implemented in Apache as of 1.2.</p>

      <p>When this feature was added to Apache it caused a flurry of
      problems on various versions of Unix because of a
      shortsightedness. The TCP specification does not state that the
      <code>FIN_WAIT_2</code> state has a timeout, but it doesn't prohibit it.
      On systems without the timeout, Apache 1.2 induces many sockets
      stuck forever in the <code>FIN_WAIT_2</code> state. In many cases this
      can be avoided by simply upgrading to the latest TCP/IP patches
      supplied by the vendor. In cases where the vendor has never
      released patches (<em>e.g.</em>, SunOS4 -- although folks with
      a source license can patch it themselves) we have decided to
      disable this feature.</p>

      <p>There are two ways of accomplishing this. One is the socket
      option <code>SO_LINGER</code>. But as fate would have it, this
      has never been implemented properly in most TCP/IP stacks. Even
      on those stacks with a proper implementation (<em>e.g.</em>,
      Linux 2.0.31) this method proves to be more expensive in CPU time
      than the next solution.</p>
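
      <p>For reference, the <code>SO_LINGER</code> approach would look
      roughly like the following sketch (generic socket code, not
      Apache's actual implementation):</p>

<example><pre>
    struct linger li;

    li.l_onoff  = 1;    /* linger on close() */
    li.l_linger = 30;   /* for at most 30 seconds */
    if (setsockopt (s, SOL_SOCKET, SO_LINGER,
                    (char *) &amp;li, sizeof (li)) &lt; 0) {
        /* handle the error */
    }
</pre></example>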

      <p>For the most part, Apache implements this in a function
      called <code>lingering_close</code> (in
      <code>http_main.c</code>). The function looks roughly like
      this:</p>

<example><pre>
    void lingering_close (int s)
    {
        char junk_buffer[2048];

        /* shutdown the sending side */
        shutdown (s, 1);

        signal (SIGALRM, lingering_death);
        alarm (30);

        for (;;) {
            select (s for reading, 2 second timeout);
            if (error) break;
            if (s is ready for reading) {
                if (read (s, junk_buffer, sizeof (junk_buffer)) &lt;= 0) {
                    break;
                }
                /* just toss away whatever is here */
            }
        }

        close (s);
    }
</pre></example>

      <p>This naturally adds some expense at the end of a connection,
      but it is required for a reliable implementation. As HTTP/1.1
      becomes more prevalent, and all connections are persistent,
      this expense will be amortized over more requests. If you want
      to play with fire and disable this feature you can define
      <code>NO_LINGCLOSE</code>, but this is not recommended at all.
      In particular, as HTTP/1.1 pipelined persistent connections
      come into use <code>lingering_close</code> is an absolute
      necessity (and <a
      href="http://www.w3.org/Protocols/HTTP/Performance/Pipeline.html">
      pipelined connections are faster</a>, so you want to support
      them).</p>

    </section>

    <section>

      <title>Scoreboard File</title>

      <p>Apache's parent and children communicate with each other
      through something called the scoreboard. Ideally this should be
      implemented in shared memory. For those operating systems that
      we either have access to, or have been given detailed ports
      for, it typically is implemented using shared memory. The rest
      default to using an on-disk file. The on-disk file is not only
      slow, but it is unreliable (and less featured). Peruse the
      <code>src/main/conf.h</code> file for your architecture and
      look for either <code>USE_MMAP_SCOREBOARD</code> or
      <code>USE_SHMGET_SCOREBOARD</code>. Defining one of those two
      (as well as their companions <code>HAVE_MMAP</code> and
      <code>HAVE_SHMGET</code> respectively) enables the supplied
      shared memory code. If your system has another type of shared
      memory, edit the file <code>src/main/http_main.c</code> and add
      the hooks necessary to use it in Apache. (Send us back a patch
      too please.)</p>

      <note>Historical note: The Linux port of Apache didn't start to
      use shared memory until version 1.2 of Apache. This oversight
      resulted in really poor and unreliable behaviour of earlier
      versions of Apache on Linux.</note>

    </section>

    <section>

      <title><code>DYNAMIC_MODULE_LIMIT</code></title>

      <p>If you have no intention of using dynamically loaded modules
      (you probably don't if you're reading this and tuning your
      server for every last ounce of performance) then you should add
      <code>-DDYNAMIC_MODULE_LIMIT=0</code> when building your
      server. This will save RAM that's allocated only for supporting
      dynamically loaded modules.</p>
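
      <p>With the autoconf-style build of Apache 2.0, one way to pass
      this flag (the install prefix here is just an example) is via
      <code>CFLAGS</code> at configure time:</p>

<example><pre>
CFLAGS="-DDYNAMIC_MODULE_LIMIT=0" ./configure --prefix=/usr/local/apache2
make
</pre></example>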

    </section>

  </section>

  <section id="trace">

    <title>Appendix: Detailed Analysis of a Trace</title>

    <p>Here is a system call trace of Apache 2.0.38 with the worker MPM
    on Solaris 8. This trace was collected using:</p>

    <example>
      truss -l -p <em>httpd_child_pid</em>
    </example>

    <p>The <code>-l</code> option tells truss to log the ID of the
    LWP (lightweight process--Solaris's form of kernel-level thread)
    that invokes each system call.</p>

    <p>Other systems may have different system call tracing utilities
    such as <code>strace</code>, <code>ktrace</code>, or <code>par</code>.
    They all produce similar output.</p>
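
    <p>On Linux, for example, a roughly equivalent trace of a running
    child could be collected with something like:</p>

    <example>
      strace -p <em>httpd_child_pid</em>
    </example>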

    <p>In this trace, a client has requested a 10KB static file
    from the httpd. Traces of non-static requests or requests
    with content negotiation look wildly different (and quite ugly
    in some cases).</p>

<example><pre>
/67:    accept(3, 0x00200BEC, 0x00200C0C, 1) (sleeping...)
/67:    accept(3, 0x00200BEC, 0x00200C0C, 1)            = 9
</pre></example>

    <p>In this trace, the listener thread is running within LWP #67.</p>

    <note>Note the lack of <code>accept(2)</code> serialization. On this
    particular platform, the worker MPM uses an unserialized accept by
    default unless it is listening on multiple ports.</note>

<example><pre>
/65:    lwp_park(0x00000000, 0)                         = 0
/67:    lwp_unpark(65, 1)                               = 0
</pre></example>

    <p>Upon accepting the connection, the listener thread wakes up
    a worker thread to do the request processing. In this trace,
    the worker thread that handles the request is mapped to LWP #65.</p>

<example><pre>
/65:    getsockname(9, 0x00200BA4, 0x00200BC4, 1)       = 0
</pre></example>

    <p>In order to implement virtual hosts, Apache needs to know
    the local socket address used to accept the connection. It
    is possible to eliminate this call in many situations (such
    as when there are no virtual hosts, or when
    <directive module="mpm_common">Listen</directive> directives
    are used which do not have wildcard addresses). But
    no effort has yet been made to do these optimizations. </p>

<example><pre>
/65:    brk(0x002170E8)                                 = 0
/65:    brk(0x002190E8)                                 = 0
</pre></example>

    <p>The <code>brk(2)</code> calls allocate memory from the heap.
    It is rare to see these in a system call trace, because the httpd
    uses custom memory allocators (<code>apr_pool</code> and
    <code>apr_bucket_alloc</code>) for most request processing.
    In this trace, the httpd has just been started, so it must
    call <code>malloc(3)</code> to get the blocks of raw memory
    with which to create the custom memory allocators.</p>

<example><pre>
/65:    fcntl(9, F_GETFL, 0x00000000)                   = 2
/65:    fstat64(9, 0xFAF7B818)                          = 0
/65:    getsockopt(9, 65535, 8192, 0xFAF7B918, 0xFAF7B910, 2190656) = 0
/65:    fstat64(9, 0xFAF7B818)                          = 0
/65:    getsockopt(9, 65535, 8192, 0xFAF7B918, 0xFAF7B914, 2190656) = 0
/65:    setsockopt(9, 65535, 8192, 0xFAF7B918, 4, 2190656) = 0
/65:    fcntl(9, F_SETFL, 0x00000082)                   = 0
</pre></example>

    <p>Next, the worker thread puts the connection to the client (file
    descriptor 9) in non-blocking mode. The <code>setsockopt(2)</code>
    and <code>getsockopt(2)</code> calls are a side-effect of how
    Solaris's libc handles <code>fcntl(2)</code> on sockets.</p>

<example><pre>
/65:    read(9, " G E T   / 1 0 k . h t m".., 8000)     = 97
</pre></example>

    <p>The worker thread reads the request from the client.</p>

<example><pre>
/65:    stat("/var/httpd/apache/httpd-8999/htdocs/10k.html", 0xFAF7B978) = 0
/65:    open("/var/httpd/apache/httpd-8999/htdocs/10k.html", O_RDONLY) = 10
</pre></example>

    <p>This httpd has been configured with <code>Options FollowSymLinks</code>
    and <code>AllowOverride None</code>.  Thus it doesn't need to
    <code>lstat(2)</code> each directory in the path leading up to the
    requested file, nor check for <code>.htaccess</code> files.
    It simply calls <code>stat(2)</code> to verify that the file:
    1) exists, and 2) is a regular file, not a directory.</p>

<example><pre>
/65:    sendfilev(0, 9, 0x00200F90, 2, 0xFAF7B53C)      = 10269
</pre></example>

    <p>In this example, the httpd is able to send the HTTP response
    header and the requested file with a single <code>sendfilev(2)</code>
    system call. Sendfile semantics vary among operating systems. On some other
    systems, it is necessary to do a <code>write(2)</code> or
    <code>writev(2)</code> call to send the headers before calling
    <code>sendfile(2)</code>.</p>

<example><pre>
/65:    write(4, " 1 2 7 . 0 . 0 . 1   -  ".., 78)      = 78
</pre></example>

    <p>This <code>write(2)</code> call records the request in the
    access log. Note that one thing missing from this trace is a
    <code>time(2)</code> call. Unlike Apache 1.3, Apache 2.0 uses
    <code>gettimeofday(3)</code> to look up the time. On some operating
    systems, like Linux or Solaris, <code>gettimeofday</code> has an
    optimized implementation that doesn't require as much overhead
    as a typical system call.</p>

<example><pre>
/65:    shutdown(9, 1, 1)                               = 0
/65:    poll(0xFAF7B980, 1, 2000)                       = 1
/65:    read(9, 0xFAF7BC20, 512)                        = 0
/65:    close(9)                                        = 0
</pre></example>

    <p>The worker thread does a lingering close of the connection.</p>

<example><pre>
/65:    close(10)                                       = 0
/65:    lwp_park(0x00000000, 0)         (sleeping...)
</pre></example>

    <p>Finally the worker thread closes the file that it has just delivered
    and blocks until the listener assigns it another connection.</p>

<example><pre>
/67:    accept(3, 0x001FEB74, 0x001FEB94, 1) (sleeping...)
</pre></example>

    <p>Meanwhile, the listener thread is able to accept another connection
    as soon as it has dispatched this connection to a worker thread (subject
    to some flow-control logic in the worker MPM that throttles the listener
    if all the available workers are busy).  Though it isn't apparent from
    this trace, the next <code>accept(2)</code> can (and usually does, under
    high load conditions) occur in parallel with the worker thread's handling
    of the just-accepted connection.</p>

  </section>

</manualpage>