[leafnode-list] Re: Fetchnews messages

Enrico Wiki enrico.c.ml.address at gmail.com
Tue May 19 15:31:02 CEST 2009


On Sat, May 16, 2009 at 6:41 PM, Matthias Andree wrote:

On 16.05.2009 at 12:28, Enrico Wiki wrote:
>
>  On Wed, May 13, 2009 at 10:45 PM, Matthias Andree wrote:
>>
>>
>>  Hi Enrico,
>>>
>>> leafnode isn't made to find magic ways around failures of the
>>> underlying transport protocol or physical links. Leafnode relies on the
>>> operating system to handle TCP properly (most do this today for the common
>>> subset that leafnode uses); and when connections break, well, that's it.
>>>
>>
>>
>> Of course. But there are times when connections are unstable, coming and
>> going, and perhaps some retrievers try harder than others? I don't know;
>> it's just a wild guess at why I could not complete a leafnode job when the
>> connection was rough, but could complete retrieval with some newsreaders.
>>
>
> Possibly. But hey, you are using Unix or a Unix-like operating system, so
> unlike with the typical graphical tool that has to integrate every feature
> itself, including retries, you can add features yourself; see below.
>
>  Having said that, I know the real culprit was my connection, like I said.
>> Now the connection is fine and leafnode is doing very well.
>>
>
> The obvious workaround would be to tell fetchnews to retry, for instance
> along these lines as a cron entry:



OK, thanks for the hints. Actually, more than looking for a practical
result (for the time being), I am testing leafnode in order to understand
what it does and how it does it, including under stress conditions, so to
speak. :-)


>
>
> 17 * * * * while ! fetchnews [--options] ; do sleep 300 ; done
>
> This will poll at minute 17 of every hour, and after a failure retry after
> sleeping for 300 seconds. Watch your logs, though...
>


That will retry after failure but not after success, right?
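If I read the shell semantics right, `while ! cmd ; do sleep ... ; done`
loops only for as long as the command fails, and exits on the first
success. A quick test of that idiom with a stand-in command (`fake_fetch`
is invented here just for the demo; substitute the real fetchnews call in
the cron entry):

```shell
#!/bin/sh
# Demo of the retry idiom from the suggested cron entry: a stand-in
# command (fake_fetch, made up for this test) fails twice and then
# succeeds, like a flaky fetchnews run over an unstable link.
attempts=0

fake_fetch() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]    # exit status 0 (success) on the 3rd try
}

# Same shape as the cron entry; a short sleep so the demo finishes quickly.
while ! fake_fetch ; do sleep 1 ; done

echo "loop left after attempt $attempts"
```

So after a success nothing runs again until cron's next scheduled
invocation; but if every retry fails, the loop keeps running into the next
cron slot, which I suppose is what the "watch your logs" warning is about.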



>  Perhaps - but usually a sign that the upstream server's database is
>>> corrupt/inconsistent, particularly the overview data doesn't match the
>>> available articles: the XOVER command offers articles that aren't
>>> available any more.
>>>
>>
>> Ok. I tried both xover and xhdr, same result.
>>
>
> They would be unlikely to differ from each other, as both access the
> overview database, which is separate from the article database in many
> implementations - including leafnode (although leafnode can afford to fix
> inconsistencies on the assumption that the overall load is lower, so you'll
> usually see consistent data): there are the message.id and group
> directories, and there are the .overview files in the group directories...
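To make that separation concrete for myself, I sketched the layout you
describe as a toy spool in a temporary directory: one article stored under
message.id/, hard-linked into a group directory, with a separate .overview
summary. All paths and contents below are invented for illustration; a
real leafnode spool normally lives under /var/spool/news:

```shell
#!/bin/sh
# Toy illustration of leafnode's spool layout: article database vs.
# overview data. Paths and contents are made up for this sketch.
spool=$(mktemp -d)

mkdir -p "$spool/message.id/345" "$spool/alt/test"

# The article itself, stored once, named after its Message-ID.
printf 'Message-ID: <1@example.invalid>\nSubject: demo\n\nbody\n' \
    > "$spool/message.id/345/<1@example.invalid>"

# Hard link into the group directory under an article number.
ln "$spool/message.id/345/<1@example.invalid>" "$spool/alt/test/1"

# The overview line is a *separate* summary of the headers; if it lists
# articles that are gone, you get exactly the XOVER mismatch discussed.
printf '1\tdemo\tsomeone@example.invalid\t<1@example.invalid>\n' \
    > "$spool/alt/test/.overview"

# Both names refer to the same inode, so the link count is 2.
links=$(stat -c %h "$spool/alt/test/1" 2>/dev/null \
        || stat -f %l "$spool/alt/test/1")
echo "hard links: $links"

rm -rf "$spool"
```

Seen that way, it is clear how the .overview file can drift out of sync
with the article files on a badly maintained server.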


BTW: if the upstream server supports xhdr, would you recommend using it for
fetchnews, in terms of speed?


>
>
>  Well, no, it wasn't aborted.
>> I tried again (starting from scratch) and had the same results.
>>
>
> Have some of the articles been crossposted to several groups? Then leafnode
> will have fetched each one for one of the groups, and when it's listed in
> another, it will not download another copy, because it already has it.


Nope, no crossposted articles in that case. Just

"store: duplicate article "

and in the same number as the "killed" articles.



>
> HTH


Thanks!

-- 
Enrico


