[leafnode-list] Re: leafnode creating multi-megabyte log files )-:
Matthias Andree
matthias.andree at gmx.de
Thu May 19 11:07:19 CEST 2011
On 16.05.2011 12:58, Arthur Marsh wrote:
> Hi, I'm running leafnode from Debian unstable:
>
> ii leafnode 1.11.8-1
>
> My config file is:
>
> victoria:/etc/news/leafnode# grep -v \# config|uniq
> expire = 15
>
> server = news.internode.on.net
> server = news.gmane.org
> only_groups_pcre = gmane\.
> server = news.mozilla.org
> only_groups_pcre = mozilla\.
> nodesc = 1
>
> debugmode = 0
>
> groupexpire comp.risks = 365
> groupexpire gmane.comp.graphics.opengraphics = 31
> groupexpire comp.protocols.kermit.misc = 365
>
> maxcrosspost = 6
>
> maxage = 15
>
> filterfile = /etc/news/leafnode/filters
>
> article_despite_filter = 1
>
> hostname = ppp121-45-136-118.lns11.adl6.internode.on.net
>
> and my filters file is:
>
> victoria:/etc/news/leafnode# more filters
> ^Newsgroups:.*[, ]gmane.spam.detected$
> ^Newsgroups:.*[, ]gmane.spam.detected,
>
> I'm also running logcheck and it is spending hours processing log files
> of the form:
>
> May 16 14:05:59 victoria fetchnews[24378]: gmane.test: killed 3423
> (<el9g92$uv5$1 at sea.gmane.org>), too old (1621 > 15) days
> May 16 14:05:59 victoria fetchnews[24378]: gmane.test: killed 3424
> (<ela7ta$7g0$1 at sea.gmane.org>), too old (1621 > 15) days
>
> Is there any straightforward way to stop leafnode writing such log files
> to begin with?
You haven't shown the fetchnews command line (from shell or cron) that
triggers this kind of logging. The -x option is notorious for that,
naturally :)
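A quick way to check is to look at the invocation itself; a minimal sketch (the crontab line below is an invented example for illustration, not taken from the poster's system):

```shell
# Minimal sketch: spot a -x refetch window in the fetchnews invocation.
# The crontab line is a made-up example; where the real entry lives varies
# (e.g. /etc/cron.d, root's or the news user's crontab).
cronline='8 * * * * news /usr/sbin/fetchnews -x 500'

case "$cronline" in
  *' -x '*) echo "-x present: every run re-examines old articles (noisy log)" ;;
  *)        echo "no -x: only articles past the stored watermark are fetched" ;;
esac

# To locate the actual entry on a typical Debian system:
#   grep -rn fetchnews /etc/cron.d /etc/crontab 2>/dev/null
```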
Also, if your fetchnews runs do not complete successfully, or cannot
update (i.e. write to) the last-fetched records in the server-status files
(.../leaf.node/news.gmane.org), that can cause this kind of logging as well.
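As a stopgap on the logcheck side, an ignore rule can suppress these kill lines; a minimal sketch, assuming the standard Debian logcheck layout (the file name leafnode-local is arbitrary, and the rule should be verified against a sample line first):

```shell
# Minimal sketch: a logcheck ignore rule for the "too old" kill lines.
# Assumptions: Debian logcheck layout under /etc/logcheck/ignore.d.server/;
# the pattern is hand-written, not shipped with leafnode.
pattern='^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ fetchnews\[[[:digit:]]+\]: .* killed [[:digit:]]+ .*, too old \([[:digit:]]+ > [[:digit:]]+\) days$'

# Check the rule against one of the logged lines before installing it:
sample='May 16 14:05:59 victoria fetchnews[24378]: gmane.test: killed 3423 (<el9g92$uv5$1 at sea.gmane.org>), too old (1621 > 15) days'
echo "$sample" | grep -qE "$pattern" && echo "pattern matches sample"

# To install (as root):
#   printf '%s\n' "$pattern" > /etc/logcheck/ignore.d.server/leafnode-local
```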