frequent disk activity - Mandriva



Thread: frequent disk activity

  1. frequent disk activity

    In observing a graph of disk activity for my 2008.0 x86_64 system (a RAID 0
    pair of SATA drives), I notice that there is a very brief access every few
    seconds; the graph looks like fairly evenly spaced vertical toothpicks.
    How can I identify what is accessing the drive? I'd like to do something
    like:
    # set the raid0 array's spindown to 10 minutes of no activity
    hdparm -S 120 /dev/sda
    hdparm -S 120 /dev/sdb

    I'm not sure you can do this with RAID 0, but even if I can, the constant
    activity will surely never let it spin down.

    One idea I had: my fstab looks like this. Can I replace relatime with
    noatime? Will that help, and would it have other unintended consequences?

    /dev/mapper/isw_ejcaedbic_Vol05 / xfs relatime 1 1
    /dev/mapper/isw_ejcaedbic_Vol01 /boot xfs relatime 1 2
    /dev/mapper/isw_ejcaedbic_Vol09 /home xfs relatime 1 2
    none /proc proc defaults 0 0
    /dev/mapper/isw_ejcaedbic_Vol06 /tmp xfs relatime 1 2
    /dev/mapper/isw_ejcaedbic_Vol07 /usr xfs relatime 1 2
    /dev/mapper/isw_ejcaedbic_Vol08 /var xfs relatime 1 2

    Thanks
    Eric



  2. Re: frequent disk activity

    Eric wrote:
    > In observing a graph of disk activity for my 2008.0 x86_64 system (a RAID 0
    > pair of SATA drives), I notice that there is a very brief access every few
    > seconds; the graph looks like fairly evenly spaced vertical toothpicks.
    > How can I identify what is accessing the drive? I'd like to do something
    > like:
    > # set the raid0 array's spindown to 10 minutes of no activity
    > hdparm -S 120 /dev/sda
    > hdparm -S 120 /dev/sdb
    >
    > I'm not sure you can do this with RAID 0, but even if I can, the constant
    > activity will surely never let it spin down.
    >
    > One idea I had: my fstab looks like this. Can I replace relatime with
    > noatime? Will that help, and would it have other unintended consequences?
    >
    > /dev/mapper/isw_ejcaedbic_Vol05 / xfs relatime 1 1
    > /dev/mapper/isw_ejcaedbic_Vol01 /boot xfs relatime 1 2
    > /dev/mapper/isw_ejcaedbic_Vol09 /home xfs relatime 1 2
    > none /proc proc defaults 0 0
    > /dev/mapper/isw_ejcaedbic_Vol06 /tmp xfs relatime 1 2
    > /dev/mapper/isw_ejcaedbic_Vol07 /usr xfs relatime 1 2
    > /dev/mapper/isw_ejcaedbic_Vol08 /var xfs relatime 1 2


    Eric,

    I am completely ignorant of RAID arrays, but I remember that Peter T.
    Breuer is interested in RAID arrays. I helped drum him out of this
    newsgroup, but if you contact him and display your cluefulness, he
    might help you.

  3. Re: frequent disk activity

    Scott B. wrote:
    > Eric,
    >
    > I am completely ignorant of RAID arrays, but I remember that Peter T.
    > Breuer is interested in RAID arrays. I helped drum him out of this
    > newsgroup


    Ugh. I helped drum Peter T. Breuer out of alt.os.linux.mandrake, not
    this newsgroup.

  4. Re: frequent disk activity

    Eric wrote:

    > In observing a graph of disk activity for my 2008.0 x86_64 system (a RAID
    > 0 pair of SATA drives), I notice that there is a very brief access every
    > few seconds; the graph looks like fairly evenly spaced vertical toothpicks.
    > How can I identify what is accessing the drive?


    man lsof

    man fuser
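
    For example, a quick sketch, assuming /var is the filesystem showing the
    activity (substitute your own mount point; /var holds logs and spool
    files, so it is the usual suspect for periodic writes):

    # list every process with a file open on the /var filesystem
    lsof /var
    # or show the processes using that mount, with owner and access type
    fuser -vm /var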

    > I'd like to do something
    > like:
    > # set the raid0 array's spindown to 10 minutes of no activity
    > hdparm -S 120 /dev/sda
    > hdparm -S 120 /dev/sdb


    I wouldn't advise that. In my humble opinion, it is not wise to spin down
    your disks in a RAID configuration, as filesystem access on RAID is by
    definition non-atomic, i.e. you've got two separate disks handling I/O to
    the same yet striped filesystem.

    Secondly, and also in my humble opinion, having the disks spin down and back
    up again is a good way to wear out your disks prematurely. With most hard
    disk designs - except for those designed by IBM/Hitachi - the read/write
    heads are parked on the innermost cylinder when the disk is spinning down.
    As such, there is friction between the disk heads and the platter
    surface(s) when the disk is spinning down and when it is spinning up.

    In the latter case, this also puts more stress on the spindle motor when
    it's attempting to rev up the disk again, and repeated start/stop cycles
    cause additional wear to the spindle motor.

    While it is true that non-enterprise-grade SATA and IDE disks are only
    validated for some 8 hours of operation per day, of which only some 2 hours
    in total under heavy load, I definitely advise against spinning down your
    disks while the computer is up and running. They should spin up when you
    boot your computer and spin down when you shut it off, but you'll get the
    best performance, the best longevity _and_ the most economical power
    consumption[1] if you keep the disks spinning for as long as the computer
    is up and running.

    [1] Spinning the disks up requires a _lot_ more power than keeping them
    spinning at the same rate continuously.

    > I'm not sure you can do this with RAID 0, but even if I can, the
    > constant activity will surely never let it spin down.


    Whether it is technically possible to spin down your disks depends on the
    disk controller. SATA and PATA can be spun down, but SCSI and SAS
    typically do not support this.
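
    If you do want to experiment anyway, a small sketch with hdparm (example
    values; run as root, one member disk at a time):

    # report the drive's current power state (active/idle vs. standby)
    hdparm -C /dev/sda
    # force an immediate spin-down, to test whether the drive honours it
    hdparm -y /dev/sda
    # -S values from 1 to 240 count in units of 5 seconds,
    # so 120 means 600 seconds, i.e. the 10 minutes from your example
    hdparm -S 120 /dev/sda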

    > One idea I had: my fstab looks like this. Can I replace relatime with
    > noatime? Will that help, and would it have other unintended consequences?
    >
    > /dev/mapper/isw_ejcaedbic_Vol05 / xfs relatime 1 1
    > /dev/mapper/isw_ejcaedbic_Vol01 /boot xfs relatime 1 2
    > /dev/mapper/isw_ejcaedbic_Vol09 /home xfs relatime 1 2
    > none /proc proc defaults 0 0
    > /dev/mapper/isw_ejcaedbic_Vol06 /tmp xfs relatime 1 2
    > /dev/mapper/isw_ejcaedbic_Vol07 /usr xfs relatime 1 2
    > /dev/mapper/isw_ejcaedbic_Vol08 /var xfs relatime 1 2


    According to Linus Torvalds, /atime/ updates are the most serious
    performance-degrading factor in terms of disk I/O; they can slow down your
    I/O by about 10%. The /atime/ field is also rarely needed nowadays,
    although some programs still use it; /mutt/ is one of them, /tmpwatch/ is
    another.

    It is much more advisable to use /noatime/ or /relatime/ instead. The
    difference is that /relatime/ will only update the /atime/ field in the
    inode if the previous /atime/ is older than the /mtime/ or /ctime/, while
    /noatime/ omits updating of the /atime/ field entirely.

    I myself normally only use /noatime/ on my filesystems, and I haven't had
    any compatibility problems with it yet. ;-)
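
    If you want to test /noatime/ without rebooting, you can switch a mounted
    filesystem on the fly; a sketch, using your /home volume as the example:

    # remount with noatime; this lasts until the next reboot
    mount -o remount,noatime /home
    # to make it permanent, change the option in /etc/fstab, e.g.:
    # /dev/mapper/isw_ejcaedbic_Vol09 /home xfs noatime 1 2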

    --
    *Aragorn*
    (registered GNU/Linux user #223157)

  5. Re: frequent disk activity

    On Tue, 06 May 2008 02:41:44 -0400, Eric wrote:

    > In observing a graph of disk activity for my 2008.0 x86_64 system (a RAID 0
    > pair of SATA drives), I notice that there is a very brief access every few
    > seconds; the graph looks like fairly evenly spaced vertical toothpicks.
    > How can I identify what is accessing the drive? I'd like to do something


    Install and run htop. Press F6 and select sorting by CPU. Watch it for
    a while to see which processes are actually running.
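
    If htop doesn't show an obvious culprit, another trick on 2.6 kernels is
    the kernel's block-dump switch, which logs every block write together with
    the name of the process responsible. Stop syslogd first, or it will log
    its own writes, and remember to switch it off again:

    # have the kernel log block I/O, with process names, to the kernel log
    echo 1 > /proc/sys/vm/block_dump
    # wait for a few of the "toothpicks", then inspect the log
    dmesg | tail -n 20
    # turn the logging back off
    echo 0 > /proc/sys/vm/block_dump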

    Regards, Dave Hodgins

    --
    Change nomail.afraid.org to ody.ca to reply by email.
    (nomail.afraid.org has been set up specifically for
    use in usenet. Feel free to use it yourself.)

  6. Re: frequent disk activity

    See my responses inline...
    Eric

    Aragorn wrote:

    > Eric wrote:
    >
    >> In observing a graph of disk activity for my 2008.0 x86_64 system (a RAID
    >> 0 pair of SATA drives), I notice that there is a very brief access every
    >> few seconds; the graph looks like fairly evenly spaced vertical toothpicks.
    >> How can I identify what is accessing the drive?

    >
    > man lsof
    >
    > man fuser
    >
    >> I'd like to do something
    >> like:
    >> # set the raid0 array's spindown to 10 minutes of no activity
    >> hdparm -S 120 /dev/sda
    >> hdparm -S 120 /dev/sdb

    >
    > I wouldn't advise that. In my humble opinion, it is not wise to spin down
    > your disks in a RAID configuration, as filesystem access on RAID is by
    > definition non-atomic, i.e. you've got two separate disks handling I/O to
    > the same yet striped filesystem.


    You might have a good point here, but the timeout occurs after some time, T,
    of no activity at all on the disk. Also, shouldn't the controller handle this?


    > Secondly, and also in my humble opinion, having the disks spin down and
    > back up again is a good way to wear out your disks prematurely.


    OK, at what point is it better to spin them up and down? If the non-use time
    is 1 hour? Probably not. After 1 day? I'd say yes. In between, the closer you
    are to the low end of the scale, the more likely it is better to just let the
    drive run; the opposite if your idle time is towards the 1-day mark.

    > With most hard
    > disk designs - except for those designed by IBM/Hitachi - the read/write
    > heads are parked on the innermost cylinder when the disk is spinning down.
    > As such, there is friction between the disk heads and the platter
    > surface(s) when the disk is spinning down and when it is spinning up.
    >


    The heads never touch the platters; they hover over them. Touching a head
    to the platter destroys both the head and the platter.

    > In the latter case, this also puts more stress on the spindle motor when
    > it's attempting to rev up the disk again, and repeated start/stop cycles
    > cause additional wear to the spindle motor.
    >
    > While it is true that non-enterprise-grade SATA and IDE disks are only
    > validated for some 8 hours of operation per day, of which only some 2 hours
    > in total under heavy load, I definitely advise against spinning down your
    > disks while the computer is up and running. They should spin up when you
    > boot your computer and spin down when you shut it off, but you'll get the
    > best performance, the best longevity _and_ the most economical power
    > consumption[1] if you keep the disks spinning for as long as the computer
    > is up and running.
    >
    > [1] Spinning the disks up requires a _lot_ more power than keeping them
    > spinning at the same rate continuously.


    It depends entirely on how much power they draw at spin-up, how much they
    use when running normally, and for how long they run. It's similar to the
    old saw about light bulbs: don't turn it off, it uses more power to start
    it up than if you just left it on. That is totally false, unless of course
    you only use the light bulb for a few milliseconds at a time.
    >
    >> I'm not sure you can do this with RAID 0, but even if I can, the
    >> constant activity will surely never let it spin down.

    >
    > Whether it is technically possible to spin down your disks depends on the
    > disk controller. SATA and PATA can be spun down, but SCSI and SAS
    > typically do not support this.
    >
    >> One idea I had: my fstab looks like this. Can I replace relatime with
    >> noatime? Will that help, and would it have other unintended consequences?
    >>
    >> /dev/mapper/isw_ejcaedbic_Vol05 / xfs relatime 1 1
    >> /dev/mapper/isw_ejcaedbic_Vol01 /boot xfs relatime 1 2
    >> /dev/mapper/isw_ejcaedbic_Vol09 /home xfs relatime 1 2
    >> none /proc proc defaults 0 0
    >> /dev/mapper/isw_ejcaedbic_Vol06 /tmp xfs relatime 1 2
    >> /dev/mapper/isw_ejcaedbic_Vol07 /usr xfs relatime 1 2
    >> /dev/mapper/isw_ejcaedbic_Vol08 /var xfs relatime 1 2

    >
    > According to Linus Torvalds, /atime/ updates are the most serious
    > performance-degrading factor in terms of disk I/O; they can slow down your
    > I/O by about 10%. The /atime/ field is also rarely needed nowadays,
    > although some programs still use it; /mutt/ is one of them, /tmpwatch/ is
    > another.
    >
    > It is much more advisable to use /noatime/ or /relatime/ instead. The
    > difference is that /relatime/ will only update the /atime/ field in the
    > inode if the previous /atime/ is older than the /mtime/ or /ctime/, while
    > /noatime/ omits updating of the /atime/ field entirely.
    >
    > I myself normally only use /noatime/ on my filesystems, and I haven't had
    > any compatibility problems with it yet. ;-)
    >

    I'm about to try it, and I agree with your assessment here: using the atime
    parameter asks the system to write to disk on every file access. relatime
    might even be similar, so I'll try noatime and see what cooks.
    I found this on the net about relatime:
    "relative atime only updates the atime if the previous atime is older than the
    mtime or ctime. Like noatime, but useful for applications like mutt that need
    to know when a file has been read since it was last modified."
    Eric


  7. Re: frequent disk activity

    Scott B. wrote:

    > Scott B. wrote:
    >> Eric,
    >>
    >> I am completely ignorant of RAID arrays, but I remember that Peter T.
    >> Breuer is interested in RAID arrays. I helped drum him out of this
    >> newsgroup

    >
    > Ugh. I helped drum Peter T. Breuer out of alt.os.linux.mandrake, not
    > this newsgroup.


    I remember that name. :-)
    Maybe I've hung around here too long; I've been Mandrake-ing since 8.x.
    Was it 8.2? Something like that...
    Eric

