In one of my FreeBSD 6.2 jails running Apache, even though the server seemed to respond ok, I saw lots of these errors in the logfile:
[warn] (61)Connection refused: connect to listener on 0.0.0.0:443
Google searching found lots of other people asking about this, but I didn't really see any good answers. Others complained about the same thing on port 80:
[warn] (61)Connection refused: connect to listener on 0.0.0.0:80
I think the problem is just that Apache in a jail can't listen on the wildcard address - :443 or 0.0.0.0:443 (or :80 or 0.0.0.0:80). If your jail has the IP 18.104.22.168 for example, then changing the Listen directives in httpd.conf and/or extra/httpd-ssl.conf to bind to that specific address seems to fix the problem.
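A sketch of the change, assuming the stock configs with their bare Listen directives (the jail IP is the example from above):

```apache
# httpd.conf - bind to the jail's address instead of the wildcard
# before:  Listen 80
Listen 18.104.22.168:80

# extra/httpd-ssl.conf
# before:  Listen 443
Listen 18.104.22.168:443
```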
Django, SCGI, and AJP
I've been doing a lot with Django lately, and initially set it up using mod_python as the Django docs recommend, but I still have some reservations about that kind of arrangement. I'd like to go back to running it under SCGI or something similar.
Django has built-in support for FastCGI, but after trying to install mod_fastcgi in my Apache 2.0.x setup, I decided it was a PITA. mod_scgi is quite easy to set up in Apache (even though its documentation is mostly nonexistent). After finding that Django implements its FastCGI support using the flup module, I saw that with just a few minor tweaks Django could be made to support all of flup's protocols, including SCGI and AJP (Apache JServ Protocol).
AJP turns out to be very interesting because it's included standard with Apache 2.2 as mod_proxy_ajp, and can work with mod_proxy_balancer - meaning you could set up multiple Django instances and have Apache share the load between them.
After testing a bit, I submitted a patch, and will probably switch to running my Django sites as AJP servers managed by daemontools, fronted by Apache 2.2.
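The Apache 2.2 side of that arrangement might look something like this sketch (the backend ports and balancer name are made up for illustration):

```apache
# Two Django instances speaking AJP, with Apache sharing the load
<Proxy balancer://django>
    BalancerMember ajp://127.0.0.1:8009
    BalancerMember ajp://127.0.0.1:8010
</Proxy>
ProxyPass / balancer://django/
```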
mod_python segfault fixed
Just as a followup: the segfault in mod_python on FreeBSD I mentioned before
was found and fixed.
It turned out not to be any kind of pointer/memory corruption like I thought, but rather a mishandled
return code from an APR (Apache Portable Runtime) function. Oh well, I got to play with gdb, ddd, and valgrind a bit, which
is good stuff to be familiar with.
FastCGI, SCGI, and Apache: Background and Future
Ran across Mark Mayo's blog entry: FastCGI, SCGI, and Apache: Background and Future, which
discusses exactly the things I've been struggling with this weekend. I have to agree that sticking an interpreter like Python
directly into Apache is a lot of trouble. I've delved into the Apache source code, and the mass of macros and #ifdefs is enough to
send you running away screaming. Trying to graft Python onto that is just begging for trouble - and I've had
some experience myself with grafting interpreters onto other things.
Running your web code in separate processes just makes a lot of sense. You have much
more freedom in your choice of language and language version. You can easily run
things under different user IDs, chrooted, in jails/zones, on completely separate
machines or completely separate OSes, maybe within virtual machines running different OSes
on the same hardware.
Anyhow, thought I'd mention this because Mark's writeup made a lot of sense to me and I thought it was worth keeping a link to it.
mod_python segfault on FreeBSD
I've been testing the mod_python 3.2.x betas as they're announced
by the developers on their mailing list. Unfortunately there
seems to be some subtle memory-related bug that only occurs on FreeBSD
(or at least on FreeBSD the way I normally install it, along with Apache and Python).
I did a lot of poking around the core with gdb and browsing of the mod_python
and Apache source code, but never quite saw where the problem could be. So I took another
approach and started stripping down the big mod_python test suite, and found that the test
that was failing ran fine by itself - but when it ran after another test for handling
large file uploads, it would crash.
So I suspect there's a problem in a whole different area of mod_python that's corrupting
something in memory which doesn't trigger a segfault until later, during the connectionhandler
test. My latest post
to the list covers some of that.
Running PyBlosxom through SCGI
Out of curiosity, I ran the Apache benchmark program ab against the plain CGI installation of PyBlosxom on my little server (-n 100 -c 10), and got around 1.5 requests/second. I decided to give SCGI a try, and got some better results.
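The invocation, reconstructed from the flags mentioned (the URL is a placeholder for wherever the CGI install lives):

```shell
# 100 requests total, 10 at a time, against the PyBlosxom CGI URL
ab -n 100 -c 10 http://localhost/cgi-bin/pyblosxom.cgi/
```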
I went about this based on what I had read in Deploying TurboGears
with Lighttpd and SCGI. I tried Lighttpd at first, and it mostly worked, but I've got an Apache setup right now,
so I wanted to stick with that for the moment (and it seems a bit quicker anyhow). I basically started by
installing flup with easy_install.
Copied the config.py and wsgi_app.py files from the PyBlosxom distribution
into a directory, and added this little script into that same directory:
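The script was along these lines - a sketch, since the file name, mount path, and port here are my illustrative choices, not necessarily the originals:

```python
# pyblosxom_scgi.py - run PyBlosxom's WSGI app as a forking SCGI server
from flup.server.scgi_fork import WSGIServer  # drop "_fork" for the threaded server
from wsgi_app import application              # wsgi_app.py from the PyBlosxom distribution

if __name__ == '__main__':
    WSGIServer(application,
               scriptName='/blog',                # must match the SCGIMount path
               bindAddress=('127.0.0.1', 4000),   # must match the SCGIMount target
               ).run()
```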
Notice how the scriptName and bindAddress parameters in the Python code
are matched in the SCGIMount Apache directive. With this setup, running the same
ab benchmark yields about 10 to 15 requests/second - not too bad. Running the threaded SCGI
server (remove the _fork from the first import line) wasn't as good: only 3 to 8 requests/second.
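For reference, the Apache side of that pairing uses mod_scgi's SCGIMount directive; in this fragment the module path, mount path, and port are assumptions:

```apache
# Map a URL path to the host:port where the flup SCGI server listens
LoadModule scgi_module libexec/apache22/mod_scgi.so
SCGIMount /blog 127.0.0.1:4000
```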
The setup seems a bit shaky in that the benchmark values seem to keep decreasing with every run,
especially in the threaded mode. So there may be some problems in my setup or in flup/scgi/pyblosxom_wsgi.
Even if it were working fine, SCGI is probably overkill for
running PyBlosxom when you're not expecting a lot of traffic. And if you were, you'd probably
run it with --static to generate static pages. But it was a reasonable
thing to fool with, for the day when you want to run a more dynamic WSGI app.