At 11:45 AM 9/14/2006, Oliver Cruickshank wrote:
> >So is your FIFO-reader crashing? Or are you working out your
> >failure-recovery options?
>
>I'm really just trying to make stuff bulletproof. Our server uses active
>/ passive clustering and during the last failover test the xferlog reader
>did not start properly. As a result our support guys had to check folders
>for what messages had been received and manually process them.


Ahh, additional requirements, if multiple servers need to monitor
the info!

> >Well, this _is_ a flaw - not captured anywhere?

>Do you mean the flaw is in proftpd not writing to the xferlog? Possibly
>it's got some timeout when writing to logs so that it doesn't wait forever,
>but it does wait forever when a user logs in and there is no xferlog reader
>- confusing!


I was thinking more in terms of worry that _anything_ could block
recording of (vital) information. Which is why I elected to keep
using plain xferlog logfiles, and not put any of my own (fallible)
code in the way.

> >To point out why this seems snide (and probably is), I don't use FIFO's
> >currently. I do everything using an xferlog _logfile_ reader. It
> >passively trails the log, detecting when new entries are added and acting
> >on that information.

>
>Not snidey at all! I'm keen to know how other people have implemented
>processing the xferlog. I reckoned trying to determine where the pointer
>was in the xferlog file was too complicated, especially when I need
>to handle failover situations (possibly I could put the xferlog file on a
>shared disk and fail that over too?) and archiving. Simple destructive
>reads were easiest for our developers at the time (no need to worry about
>missing data, archiving, which record we're on, etc.).
>
>If I don't find a resolution to the xferlog FIFO problem, I'll probably look
>at implementing an xferlog _logfile_ reader though - did you build it yourself?
>or use tools off the Internet?


Home-grown, and ugly to boot. It does try to do all that tracking by
updating a state file, writing out the last logfile read position, the
logfile size (to catch logfile rotations), the last modification time,
and (I think) the last log entry timestamp (hmm, no, it doesn't do this
last one and should). And it does break occasionally, but the last time
was so long ago I'd have to check.

One reason I wanted it to be separate is we were still defining
things, like which upload directories would be 'magic'. With
different dir names for different users (yes, ick).

While also handling such magic as figuring out that when the log entry reads:
    Thu Sep 14 12:27:21 2006 0 ftp 135 /varoptbudg/ftphome/tom/tom_at_budg.txt b _ i r tom ftp 0 * c
that the filename might *really* be using spaces:
    ftp> put tom_at_budg.txt "tom at budg.txt"
    local: tom_at_budg.txt remote: tom at budg.txt
    -rw-rw----   1 budgftp  budgftp   135 Sep 14 12:27 tom at budg.txt
(xferlog entry filepaths can't have spaces as that would mess up
parsers, so you end up having to fuzzy match, sometimes)
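The fuzzy match falls out of the fixed xferlog field layout: five
timestamp tokens up front, nine fixed tokens at the end, and the filename
is whatever sits in between (past transfer-time, remote-host, and
file-size). A rough sketch (Python; the helper names are mine, not from
any real tool):

```python
import os

def parse_xferlog_line(line):
    """Split one classic-format xferlog line into named fields."""
    tokens = line.split()
    return {
        "when": " ".join(tokens[:5]),        # e.g. Thu Sep 14 12:27:21 2006
        "transfer_time": tokens[5],
        "remote_host": tokens[6],
        "file_size": tokens[7],
        "filename": " ".join(tokens[8:-9]),  # the variable middle part
        "transfer_type": tokens[-9],
        "special_action": tokens[-8],
        "direction": tokens[-7],
        "access_mode": tokens[-6],
        "username": tokens[-5],
        "service": tokens[-4],
        "auth_method": tokens[-3],
        "auth_user_id": tokens[-2],
        "completion_status": tokens[-1],
    }

def candidates_on_disk(logged_path):
    """Find files whose real name may have had spaces logged as underscores."""
    directory, logged_name = os.path.split(logged_path)
    try:
        entries = os.listdir(directory)
    except OSError:
        return []
    return [name for name in entries
            if name.replace(" ", "_") == logged_name]
```

Note the underscore substitution can be ambiguous (tom_at_budg.txt and
"tom at budg.txt" both map to the same logged name), which is why this
returns candidates rather than a single answer.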

Another bit was that things like the above were specific to ProFTPD,
and the whole application could take in 'work' from other "intake
points" (like email attachments). So this was just one of several
components that were reacting to their incoming data streams and then
writing "work item" entries into yet another queue dir, which a
further component would pick up and schedule. (Each work item is a
separate file in that directory. Did I mention I like de-coupled components?)
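That one-file-per-work-item pattern is easy to make crash-safe with a
write-then-rename, since rename is atomic on a single filesystem and a
consumer scanning the directory never sees a half-written item. A minimal
sketch (Python; the directory and naming scheme are hypothetical, not how
our actual components name things):

```python
import json
import os
import time
import uuid

QUEUE_DIR = "/var/spool/work-items"   # assumed path

def enqueue(item):
    """Drop one work item into the queue dir; rename makes it appear atomically."""
    os.makedirs(QUEUE_DIR, exist_ok=True)
    # Zero-padded nanosecond prefix so lexicographic sort is arrival order.
    name = "%019d-%s.json" % (time.time_ns(), uuid.uuid4().hex)
    tmp = os.path.join(QUEUE_DIR, "." + name)  # dot-prefix hides it from consumers
    with open(tmp, "w") as f:
        json.dump(item, f)
    os.replace(tmp, os.path.join(QUEUE_DIR, name))
    return name

def dequeue_all():
    """Consumer side: read visible items oldest-first, deleting each one."""
    try:
        names = sorted(n for n in os.listdir(QUEUE_DIR) if not n.startswith("."))
    except OSError:
        return
    for name in names:
        path = os.path.join(QUEUE_DIR, name)
        with open(path) as f:
            item = json.load(f)
        os.unlink(path)   # destructive read: the item is now claimed
        yield item
```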

Oh, and we had switched over at one point from Windows to Linux, and
the old FTP server (Serv-U) didn't have anything except for text logfiles.

Anyway, with requirements changing and bugs popping up, we definitely
wanted to be able to independently swap this component in and out
again, without impacting ProFTPD. And so we passively 'trail' the
plaintext xferlog logfile.

How I would do this if I _had_ failover machines, I'd have to
consider. I still _really_ like using plain text files for
communicating between components. (Our rates for incoming events
aren't too high for that simple idea to still be effective.) At one
point we were going to have shared RAID, but didn't get to test much
before they took that equipment away. I'm sure there are pitfalls
with every method of sharing files. (I get queasy feelings at the
acronyms NFS and NIS, for example)

>thanks,
>
>--
>Olly Cruickshank




_______________________________________________
ProFTPD Users List
Unsubscribe problems?
http://www.proftpd.org/list-unsub.html