
Thread: hard drive size on a RAID1 array

  1. hard drive size on a RAID1 array

    Hi all,

    I have two 500 GB hard drives that are configured in a RAID 1 array.
    They are partitioned into two: a 20 GB partition and a 480 GB partition
    (with 2 GB of swap).

    My problem is as follows:

    When I do df -h I get the following:
    Filesystem Size Used Avail Use% Mounted on
    /dev/md0 19G 9.8G 7.8G 56% /
    varrun 1013M 128K 1013M 1% /var/run
    varlock 1013M 0 1013M 0% /var/lock
    udev 1013M 80K 1013M 1% /dev
    devshm 1013M 0 1013M 0% /dev/shm
    lrm 1013M 34M 979M 4% /lib/modules/2.6.22-14-generic/volatile

    Where is my other partition??

    I tried using fdisk (though I don't really know how to use it that well)
    and printed the following:

    Device Boot Start End Blocks Id System
    /dev/sdb1 * 1 60557 486424071 fd Linux raid autodetect
    /dev/sdb2 60558 60801 1959930 fd Linux raid autodetect

    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 60557 486424071 fd Linux raid autodetect
    /dev/sda2 60558 60801 1959930 fd Linux raid autodetect



    Further information:

    $ cat /etc/fstab
    # /etc/fstab: static file system information.
    #
    #
    proc /proc proc defaults 0 0
    # /dev/md0
    UUID=3130e988-88a8-4e87-8b85-7a267904a370 / ext3 defaults,errors=remount-ro 0 1
    # /dev/md1
    UUID=8e35bf5c-6501-49fa-960e-ac3f589ffa1b none swap sw 0 0
    /dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec 0 0
    /dev/fd0 /media/floppy0 auto rw,user,noauto,exec 0 0

    $ swapon -s
    Filename Type Size Used Priority
    /dev/md1 partition 1959800 34756 -1

    $ free
    total used free shared buffers cached
    Mem: 2074372 1124088 950284 0 80548 909980
    -/+ buffers/cache: 133560 1940812
    Swap: 1959800 34756 1925044

    $ lsmod
    Module Size Used by
    af_packet 24840 2
    vmnet 39092 13
    vmmon 1825708 8
    rfcomm 42136 2
    l2cap 26240 11 rfcomm
    bluetooth 57060 4 rfcomm,l2cap
    ppdev 10244 0
    ipv6 273892 14
    acpi_cpufreq 10568 1
    cpufreq_stats 7232 0
    cpufreq_ondemand 9612 1
    cpufreq_conservative 8072 0
    freq_table 5792 3 acpi_cpufreq,cpufreq_stats,cpufreq_ondemand
    cpufreq_userspace 5280 0
    cpufreq_powersave 2688 0
    sbs 19592 0
    video 18060 0
    ac 6148 0
    button 8976 0
    container 5504 0
    dock 10656 0
    battery 11012 0
    lp 12580 0
    loop 19076 0
    psmouse 39952 0
    parport_pc 37412 1
    parport 37448 3 ppdev,lp,parport_pc
    pcspkr 4224 0
    shpchp 34580 0
    pci_hotplug 32704 1 shpchp
    evdev 11136 2
    ext3 133896 1
    jbd 60456 1 ext3
    mbcache 9732 1 ext3
    sg 36764 0
    sr_mod 17828 1
    cdrom 37536 1 sr_mod
    sd_mod 30336 6
    usbhid 29536 0
    hid 28928 1 usbhid
    ata_generic 8452 0
    floppy 60004 0
    ahci 23300 4
    pata_it8213 9348 1
    e1000 126272 0
    libata 125168 3 ata_generic,ahci,pata_it8213
    scsi_mod 147084 4 sg,sr_mod,sd_mod,libata
    ehci_hcd 36492 0
    uhci_hcd 26640 0
    usbcore 138632 4 usbhid,ehci_hcd,uhci_hcd
    raid10 26496 0
    raid456 128016 0
    xor 16904 1 raid456
    raid1 25984 2
    raid0 9728 0
    multipath 9984 0
    linear 7552 0
    md_mod 82324 9 raid10,raid456,raid1,raid0,multipath,linear
    thermal 14344 0
    processor 32072 2 acpi_cpufreq,thermal
    fan 5764 0
    fuse 47124 1
    apparmor 40728 0
    commoncap 8320 1 apparmor

    $ du -h --max-depth=1 /
    156M /lib
    4.0K /srv
    4.0K /opt
    236K /dev
    4.7M /bin
    12K /media
    0 /sys
    40K /root
    4.0K /initrd
    0 /proc
    7.9M /home
    9.5M /etc
    322M /var
    2.1G /usr
    18M /boot
    16K /lost+found
    6.4M /sbin
    20K /tmp
    4.0K /mnt
    2.6G /

    $ cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
    [raid4] [raid10]
    md1 : active raid1 sda2[0] sdb2[1]
    1959808 blocks [2/2] [UU]

    md0 : active raid1 sda1[0] sdb1[1]
    486424000 blocks [2/2] [UU]

    unused devices: <none>

    Very strange is the following:

    # mount /dev/md0
    mount: /dev/md0 already mounted or / busy
    mount: according to mtab, /dev/md0 is already mounted on /

    # mount /dev/md1
    mount: mount point none does not exist
    (must be swap?)

    # fdisk /dev/md1
    p

    Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

    Command (m for help): p

    Disk /dev/md1: 2006 MB, 2006843392 bytes
    2 heads, 4 sectors/track, 489952 cylinders
    Units = cylinders of 8 * 512 = 4096 bytes
    Disk identifier: 0xd436a49e

    Device Boot Start End Blocks Id System

    # fdisk /dev/md0
    p
    Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

    Command (m for help): p

    Disk /dev/md0: 498.0 GB, 498098176000 bytes
    2 heads, 4 sectors/track, 121606000 cylinders
    Units = cylinders of 8 * 512 = 4096 bytes
    Disk identifier: 0xfe9d38d6

    Device Boot Start End Blocks Id System

    So, according to this, / should be 498 GB..
    But it's only 20..

    Any ideas?

  2. Re: hard drive size on a RAID1 array

    Diego wrote:

    > Hi all,
    >
    > I have two 500 GB hard drives that are configured in a raid 1 array.
    > They are partitioned into two: a 20gb partition and a 480gb partition
    > (with 2gb of swap).


    That would be three partitions then? A swap partition is a partition too,
    and on /x86/ it cannot be larger than ~2 GB; you can however use multiple
    swap partitions and as such increase the amount of swap space available to
    the kernel.

    > My problem is as follows:
    >
    > When I do df -h I get the following:
    > Filesystem Size Used Avail Use% Mounted on
    > /dev/md0 19G 9.8G 7.8G 56% /
    > varrun 1013M 128K 1013M 1% /var/run
    > varlock 1013M 0 1013M 0% /var/lock
    > udev 1013M 80K 1013M 1% /dev
    > devshm 1013M 0 1013M 0% /dev/shm
    > lrm 1013M 34M 979M 4% /lib/modules/2.6.22-14-
    > generic/volatile
    >
    > Where is my other partition??


    See above... and see further down: a swap partition is not listed in
    /df/ output.

    > I tried using fdisk (but I don't really know how to use it that well)
    > And I print the following:
    >
    > Device Boot Start End Blocks Id System
    > /dev/sdb1 * 1 60557 486424071 fd Linux raid
    > autodetect
    > /dev/sdb2 60558 60801 1959930 fd Linux raid
    > autodetect
    >
    > Device Boot Start End Blocks Id System
    > /dev/sda1 * 1 60557 486424071 fd Linux raid
    > autodetect
    > /dev/sda2 60558 60801 1959930 fd Linux raid
    > autodetect


    So you've created two partitions on each disk, not three... But read on...

    > Further information:
    >
    > $ cat /etc/fstab
    > # /etc/fstab: static file system information.
    > #
    > #
    > proc /proc proc defaults 0 0
    > # /dev/md0
    > UUID=3130e988-88a8-4e87-8b85-7a267904a370 / ext3
    > defaults,errors=remount-ro 0 1


    The above is your first metadevice, i.e. your first RAID 1 volume, so to
    speak. You've mounted it on the root directory.

    > # /dev/md1
    > UUID=8e35bf5c-6501-49fa-960e-ac3f589ffa1b none swap
    > sw 0 0


    The above is your second metadevice, i.e. your second RAID 1 volume, which
    you've set up as the swap partition.

    Mirroring a swap partition is not really helpful, in my humble opinion.
    You'd have been better off making a single swap partition on each drive at
    about 1 GB in size, and using them with equal priority in */etc/fstab,*
    which would effectively make them into a stripe.
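
    For illustration, such equal-priority entries in */etc/fstab* might look
    like this - a sketch only, assuming plain swap partitions on each drive
    instead of the md1 mirror, with hypothetical device names:

    /dev/sda2 none swap sw,pri=1 0 0
    /dev/sdb2 none swap sw,pri=1 0 0

    With equal pri= values, the kernel allocates swap pages from both
    partitions in round-robin fashion, which is what gives the striping
    effect.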

    > /dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec 0 0
    > /dev/fd0 /media/floppy0 auto rw,user,noauto,exec 0 0
    >
    > $ swapon -s
    > Filename Type Size
    > Used Priority
    > /dev/md1 partition 1959800
    > 34756 -1


    So far so good - at least, for your intended use.

    > $ free
    > total used free shared buffers
    > cached
    > Mem: 2074372 1124088 950284 0 80548
    > 909980
    > -/+ buffers/cache: 133560 1940812
    > Swap: 1959800 34756 1925044


    Your kernel is obviously paging out to the swap partition, so you know it's
    being used.

    > [...]




    > $ du -h --max-depth=1 /
    > 156M /lib
    > 4.0K /srv
    > 4.0K /opt
    > 236K /dev
    > 4.7M /bin
    > 12K /media
    > 0 /sys
    > 40K /root
    > 4.0K /initrd
    > 0 /proc
    > 7.9M /home
    > 9.5M /etc
    > 322M /var
    > 2.1G /usr
    > 18M /boot
    > 16K /lost+found
    > 6.4M /sbin
    > 20K /tmp
    > 4.0K /mnt
    > 2.6G /


    The above only tells you how much space a directory takes up, not how much
    is available.

    > $ cat /proc/mdstat
    > Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
    > [raid4] [raid10]
    > md1 : active raid1 sda2[0] sdb2[1]
    > 1959808 blocks [2/2] [UU]
    >
    > md0 : active raid1 sda1[0] sdb1[1]
    > 486424000 blocks [2/2] [UU]
    >
    > unused devices:
    >
    > Very strange is the following:
    >
    > # mount /dev/md0
    > mount: /dev/md0 already mounted or / busy
    > mount: according to mtab, /dev/md0 is already mounted on /


    Nothing strange about this. According to */etc/fstab,* that's your root
    filesystem.

    > # mount /dev/md1
    > mount: mount point none does not exist
    > (must be swap?)


    You cannot mount a swap partition via the /mount/ command. Its error
    message is as designed: you are telling it to mount a block device, so it
    parses */etc/fstab* to see what mountpoint it must use, and all it finds is
    the word "none". Therefore it looks for the directory "./none" - i.e. a
    directory named "none" in the current working directory - which obviously
    does not exist.

    I don't even know what would happen if that mountpoint *were* to actually
    exist, so perhaps you're lucky that it halts its execution at the failure
    of finding the mountpoint.

    > # fdisk /dev/md1
    > p
    >
    > Warning: invalid flag 0x0000 of partition table 4 will be corrected by
    > w(rite)
    >
    > Command (m for help): p
    >
    > Disk /dev/md1: 2006 MB, 2006843392 bytes
    > 2 heads, 4 sectors/track, 489952 cylinders
    > Units = cylinders of 8 * 512 = 4096 bytes
    > Disk identifier: 0xd436a49e
    >
    > Device Boot Start End Blocks Id System
    >
    > # fdisk /dev/md0
    > p
    > Warning: invalid flag 0x0000 of partition table 4 will be corrected by
    > w(rite)
    >
    > Command (m for help): p
    >
    > Disk /dev/md0: 498.0 GB, 498098176000 bytes
    > 2 heads, 4 sectors/track, 121606000 cylinders
    > Units = cylinders of 8 * 512 = 4096 bytes
    > Disk identifier: 0xfe9d38d6
    >
    > Device Boot Start End Blocks Id System
    >
    > So, according to this, / should be 498 GB..
    > But it's only 20..
    >
    > Any ideas?


    I have very little experience with Linux software RAID, but there are two
    things I can think of that could have gone wrong.

    The first and simplest thing would be that you've created a filesystem of
    only 20 GB in size when you should have created one that fills up your
    entire partition. Just because your partition is 486 GB in size doesn't
    mean that your filesystem is. The solution would be to resize the
    filesystem using the appropriate tools and precautions - e.g. some
    filesystems prefer being mounted when resized, while others have to be
    unmounted first. In your case, the pertaining filesystem being the root
    filesystem, you would need to do that from a Live CD.
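
    As a rough sketch of that offline route for ext3 - assuming the Live CD
    assembles the mirror as /dev/md0 and carries e2fsprogs:

    # e2fsck -f /dev/md0
    # resize2fs /dev/md0

    The e2fsck run comes first because resize2fs insists on a clean
    filesystem; with no size argument, resize2fs grows the filesystem to fill
    the device.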

    The second thing - and this is where I am only offering a suggestion as I
    don't have the expertise and I may be wrong - would be that you possibly
    designated the incorrect filesystem type when creating your partitions,
    which could explain why the /fdisk/ report above doesn't show you any
    statistics on */dev/md0* and why there is a warning of an invalid flag.

    If my reasoning is correct, then you should have simply created those
    partitions on */dev/sda* and */dev/sdb* as "Linux native" and "Linux swap"
    respectively, not as "Linux RAID". I believe the latter would be the type
    designation you get from */dev/md0* and */dev/md1.*

    Like I said, my experiences with software RAID are highly limited - I've
    never actually set it up myself, but I've worked on a machine that has a
    software RAID 1 - so if my comments about the partition types break your
    system, you get to keep both pieces. ;-)

    As a wise man once said to me, when all else fails, read the manual...

    Good luck! ;-)

    --
    Aragorn
    (registered GNU/Linux user #223157)

  3. Re: hard drive size on a RAID1 array

    On Apr 15, 11:11 am, Aragorn wrote:
    > Diego wrote:
    > > I have two 500 GB hard drives that are configured in a RAID 1 array.
    > [snip]
    >
    > The first and simplest thing would be that you've created a filesystem of
    > only 20 GB in size when you should have created one that fills up your
    > entire partition. [...] The solution would be to resize the filesystem
    > using the appropriate tools and precautions [...] In your case, the
    > pertaining filesystem being the root filesystem, you would need to do
    > that from a Live CD.
    > [snip]

    How would you go about attempting to resize the filesystem whilst the
    partition is still mounted?
    The problem is that the server is in a different state -- I don't have
    physical access to it.. :S

  4. Re: hard drive size on a RAID1 array

    On Apr 15, 10:14 am, Diego wrote:
    > I have two 500 GB hard drives that are configured in a RAID 1 array.
    > They are partitioned into two: a 20 GB partition and a 480 GB partition
    > (with 2 GB of swap).
    > [snip]
    >
    > So, according to this, / should be 498 GB..
    > But it's only 20..
    >
    > Any ideas?


    Can I change the size of the raid array using mdadm?
    Does anyone have experience with this?

  5. Re: hard drive size on a RAID1 array

    On 2008-04-15 08:03, Diego wrote:

    >
    > Can I change the size of the raid array using mdadm?
    > Does anyone have experience with this?


    Did you really make a new fs after you created md0?
    It seems it found some old fs with a size of 20G.

    Anyway, try wc -c /dev/md0 and see if your md0 is what you expect.
    Or maybe dd if=/dev/md0 of=/dev/null bs=1024k, which gives you the
    number of megabytes of the md0 device.
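
    A quicker alternative, if your util-linux has it, is to ask the kernel for
    the size directly instead of reading through the whole device:

    # blockdev --getsize64 /dev/md0

    which prints the device size in bytes.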

    You can resize it online with resize2fs, but you may need the system in
    single-user mode, since I guess it will be write-locked during the resize.

    You can also go to partimage.org and download the rescue CD image, and use
    it to launch parted and other tools.

    E.g. start by cloning the system to a USB device with partimage so you can
    restore the system if some operation fails.
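
    As a rough sketch of that clone step (paths hypothetical, assuming the USB
    disk is mounted at /mnt/usb):

    # partimage save /dev/md0 /mnt/usb/md0-root.partimg.gz

    partimage only stores blocks that are in use, so the image should come out
    much smaller than the device itself.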

    /bb

  6. Re: hard drive size on a RAID1 array

    On Tue, 15 Apr 2008 09:00:07 UTC in comp.os.linux.hardware, birre
    wrote:

    > You can resize it online with resize2fs, but you need the system in single user
    > maybe, since I guess it will be write locked during the resize.


    resize2fs works for me on mounted filesystems on runlevel 3 systems, but this
    probably depends on distro/kernel/e2fsprogs versions. Mine was tested on CentOS 5.1.

    --
    Trevor Hemsley, Brighton, UK
    Trevor dot Hemsley at ntlworld dot com

  7. Re: hard drive size on a RAID1 array

    Diego wrote:

    > On Apr 15, 11:11 am, Aragorn wrote:
    >
    >> [...]
    >> The first and simplest thing would be that you've created a filesystem of
    >> only 20 GB in size when you should have created one that fills up your
    >> entire partition. Just because your partition is 486 GB in size doesn't
    >> mean that your filesystem is. The solution would be to resize the
    >> filesystem using the appropriate tools and precautions - e.g. some
    >> filesystems prefer being mounted when resized, while others have to be
    >> unmounted first. In your case, the pertaining filesystem being the root
    >> filesystem, you would need to do that from a Live CD.
    >>
    >> [...]

    >
    > How would you go about attempting to resize the filesystem whilst the
    > partition is still mounted?


    This is only possible (and preferred) with the SGI-originated /XFS/
    filesystem, as far as I know. So a prerequisite would be that you do
    indeed have that filesystem, rather than the more prevalent /ext3/
    or /reiserfs/ filesystems, each of which needs to be unmounted before any
    attempts to resize them.

    You haven't mentioned what distribution you are running on that machine, but
    I do know from experience that RedHat, Fedora Core and CentOS do not allow
    you to choose anything other than /ext3/ for installation; other
    filesystems are recognized but are marked unavailable for installation, and
    cannot be created from within the /anaconda/ installer for other purposes
    either.

    The mainstream Linux kernel currently supports six UNIX-style filesystems in
    read/write mode, i.e. /ext2,/ /ext3,/ /ext4,/ /reiserfs,/ /XFS/ and /JFS./
    Most GNU/Linux distributions will normally default to using /ext3,/ and
    considering that you've decided to leave everything sitting on the root
    partition instead of splitting off a bunch of stuff, I can only presume
    that you stuck to the distribution's default choices.

    As for the "how", you do it with the filesystem-specific utilities, from the
    command line. In your case, however, with the pertaining filesystem housing
    your root directory and no alternative root partitions available, this is
    going to be a major problem.

    > The problem is that the server is in a different state -- I don't have
    > physical access to it.. :S


    Then I'm afraid you're going to have to get over there or have someone
    residing in that vicinity and with a sufficient level of knowledge carry
    out this task for you. :-/

    --
    Aragorn
    (registered GNU/Linux user #223157)

  8. Re: hard drive size on a RAID1 array

    Aragorn staggered into the Black Sun and said:
    > Diego wrote:
    >> On Apr 15, 11:11 am, Aragorn wrote:
    >>> The first and simplest thing would be that you've created a
    >>> filesystem of only 20 GB in size when you should have created one
    >>> that fills up your entire partition. Just because your partition is
    >>> 486 GB in size doesn't mean that your filesystem is.

    >> How would you go about attempting to resize the filesystem whilst the
    >> partition is still mounted?

    > This is only possible (and preferred) with the SGI-originated /XFS/
    > filesystem, as far as I know.


    Ext2 online resizing has been in the vanilla kernel for a while now. This
    allows you to expand an ext2/3 filesystem with resize2fs. You can also
    expand a ReiserFS in much the same way. If you want to shrink either of
    those, you have to umount the FS first.
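
    A minimal sketch of the online grow - assuming a kernel and e2fsprogs
    recent enough to support it - run while the filesystem is mounted:

    # resize2fs /dev/md0

    With no size argument, resize2fs grows the filesystem to the size of the
    underlying device.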

    > As for the "how", you do it with the filesystem-specific utilities,
    > from the commandline. In your case however, with the pertaining
    > filesystem housing your root directory and no alternative root
    > partitions available, this is going to be a major problem.


    Not if the kernel and utils are up to date. The kernel patch has been
    in the vanilla kernel since at least 2.6.22, and probably was there in a
    couple of minor revisions before that.

    --
    Due to inflation, your 40 acres and a mule have now been reduced to
    400 square feet and a guinea pig.
    My blog and resume: http://crow202.dyndns.org:8080/wordpress/
    Matt G|There is no Darkness in Eternity/But only Light too dim for us to see

  9. Re: hard drive size on a RAID1 array

    On Tue, 15 Apr 2008, Aragorn wrote:

    > A swap partition is a partition too, and on /x86/ it cannot be larger
    > than ~2 GB; you can however use multiple swap partitions and as such
    > increase the amount of swap space available to the kernel.


    It has been many years since this restriction was lifted (I don't know
    offhand which kernel version). I have many 32-bit boxes with swap
    partitions greater than 2GB.

    -s

  10. Re: hard drive size on a RAID1 array

    Steve Thompson wrote:

    > On Tue, 15 Apr 2008, Aragorn wrote:
    >
    >> A swap partition is a partition too, and on /x86/ it cannot be larger
    >> than ~2 GB; you can however use multiple swap partitions and as such
    >> increase the amount of swap space available to the kernel.

    >
    > It has been many years since this restriction was lifted (I don't know
    > offhand which kernel version). I have many 32-bit boxes with swap
    > partitions greater than 2GB.


    Sure, they can _be_ greater, but is that extra space also being *used?*

    Also, I do not believe this restriction was lifted at all yet as I follow
    kernel development relatively closely and I have not come across anything
    indicating what you're suggesting. Additionally, the documentation still
    says that ~2 GB is the maximum available size for both /x86-32/
    and /x86-64/ - other platforms may have other restrictions.

    If you are certain of this, then I would appreciate some reference to
    substantiate this claim and further educate me. ;-)

    It *is* however possible - as I believe I wrote earlier - to use
    more than one swap partition of ~2 GB in size, and as such you would be
    multiplying the available amount of swap by simply adding a new swap
    partition to your existing set-up. By setting a priority for the swap
    partitions in */etc/fstab* you could even have swap partitions on two or
    more separate disks and have them act as a RAID 0 swap array.

    --
    Aragorn
    (registered GNU/Linux user #223157)

  11. Re: hard drive size on a RAID1 array

    On Tue, 15 Apr 2008 22:40:28 UTC in comp.os.linux.hardware, Aragorn
    wrote:

    > If you are certain of this, then I would appreciate some reference to
    > substantiate this claim and further educate me. ;-)


    man mkswap:

    ...
    The maximum useful size of a swap area depends on the architecture and the
    kernel version. It is roughly 2GiB on i386, PPC, m68k, ARM, 1GiB on sparc,
    512MiB on mips, 128GiB on alpha and 3TiB on sparc64. For kernels after
    2.3.3 there is no such limitation.
    ...

    Notice the 'For kernels after' line.

    --
    Trevor Hemsley, Brighton, UK
    Trevor dot Hemsley at ntlworld dot com


  12. Re: hard drive size on a RAID1 array

    Trevor Hemsley wrote:

    > On Tue, 15 Apr 2008 22:40:28 UTC in comp.os.linux.hardware, Aragorn
    > wrote:
    >
    >> If you are certain of this, then I would appreciate some reference to
    >> substantiate this claim and further educate me. ;-)

    >
    > man mkswap
    > ...
    > The maximum useful size of a swap area depends on the architecture
    > and the kernel version. It is roughly 2GiB
    > on i386, PPC, m68k, ARM, 1GiB on sparc, 512MiB on mips, 128GiB on
    > alpha and 3TiB on sparc64. For kernels after 2.3.3 there is no such
    > limitation.
    > ...
    >
    > Notice the 'For kernels after' line.


    I'll take your word for it - or the word of your /mkswap/ /man/ page. ;-)

    On this machine here, running an old Mandrake 10.0 with a 2.6.3 kernel,
    although the "advised" kernel for this distro was 2.4.22 - which does
    indeed come after 2.3.3 - the /man/ page for /mkswap/ has no such line; the
    quoted paragraph simply ends with the period before the "For kernels
    after..." line.

    MandrakeSoft's/Mandriva's mistake then...

    --
    Aragorn
    (registered GNU/Linux user #223157)

  13. Re: hard drive size on a RAID1 array

    On Apr 15, 10:14 am, Diego wrote:
    > I have two 500 GB hard drives that are configured in a RAID 1 array.
    > They are partitioned into two: a 20 GB partition and a 480 GB partition
    > (with 2 GB of swap).
    > [snip]
    >
    > So, according to this, / should be 498 GB..
    > But it's only 20..
    >
    > Any ideas?


    Hi all,

    First of all, thanks to all who contributed to this post.. Your help
    is very much appreciated.

    To my problem: I managed to get it working!

    I thought.. 'what the hell.. if I break something I'll have to get
    the machine back anyway..'

    So I did the following:
    # resize2fs -f /dev/md0 400000M

    And, it took a little while, but it worked!

    The machine re-started, I'm copying files across, it's all working as
    normal..
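
    A quick way to confirm the new size took effect is:

    $ df -h /

    With a 400000M filesystem, the Size column should now read roughly 390G
    (400000 MiB is about 390 GiB), give or take ext3's metadata overhead.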

    So again, thanks for all who helped and contributed..

    BTW..
    It's an Ubuntu Server install.

    Regards,

    I.

  14. Re: hard drive size on a RAID1 array

    On Wed, 16 Apr 2008 00:12:47 UTC in comp.os.linux.hardware, Aragorn
    wrote:

    > I'll take your word for it - or the word of your /mkswap/ /man/ page. ;-)


    Is this any more convincing?

    [trevor@trevors ~]$ /sbin/swapon -s
    Filename Type Size Used Priority
    /dev/mapper/VolGroupTH-LogVolSwap partition 4194296 920 -1

    --
    Trevor Hemsley, Brighton, UK
    Trevor dot Hemsley at ntlworld dot com

  15. Re: hard drive size on a RAID1 array



    On Mon, 14 Apr 2008, Diego wrote:

    > # /dev/md1
    > UUID=8e35bf5c-6501-49fa-960e-ac3f589ffa1b none swap


    SNIP

    >
    > $ cat /proc/mdstat
    > Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
    > [raid4] [raid10]
    > md1 : active raid1 sda2[0] sdb2[1]
    > 1959808 blocks [2/2] [UU]


    Pardon me if this has already been mentioned, but using a RAID partition
    for swap is a really bad idea -- it will slow down the machine as the data
    has to be written twice (RAID 1). Even using RAID 0 for swap is pointless,
    since the kernel will do the same if one merely gives it two swap
    partitions with equal priority.
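
    For example, the runtime equivalent of equal-priority swap (partition
    names hypothetical, assuming plain swap partitions rather than md
    devices):

    # swapon -p 1 /dev/sda2
    # swapon -p 1 /dev/sdb2

    Equal priorities make the kernel interleave pages across both devices,
    much like a RAID 0 stripe, without any md layer involved.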

  16. Re: hard drive size on a RAID1 array

    On 2008-04-16 05:07, Whoever wrote:
    >
    >
    > On Mon, 14 Apr 2008, Diego wrote:
    >
    >> # /dev/md1
    >> UUID=8e35bf5c-6501-49fa-960e-ac3f589ffa1b none swap

    >
    > SNIP
    >
    >>
    >> $ cat /proc/mdstat
    >> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
    >> [raid4] [raid10]
    >> md1 : active raid1 sda2[0] sdb2[1]
    >> 1959808 blocks [2/2] [UU]

    >
    > Pardon me if this has already been mentioned, but using a RAID partiton
    > for swap is a really bad idea -- it will slow down the machine as the
    > data has to be written twice (RAID1). Even using RAID0 for swap is
    > pointless, since the kernel will do the same if one merely gives it two
    > swap partitions with equal priority.


    Yes, but there is one advantage to it: the machine or applications
    don't crash if you get a bad block on the swap device.

    /bb

  17. Re: hard drive size on a RAID1 array

    On 2008-04-16 02:39, Diego wrote:

    > So I did the following:
    > # resize2fs -f /dev/md0 400000M
    >
    > And, it took a little while, but it worked!


    When I was reading the manual for resize2fs, it said that all the
    available space will be used if you don't give a size.

    Did you try it without the 400000M?

    The danger here is making a filesystem that is bigger than the slice,
    so you get a surprise later when your disk is full of data.

    But as I see it now, you are safe - maybe too safe, leaving 73 GB unused.

    $ bc
    scale=10
    400000*1024*1024
    419430400000
    498098176000-.
    78667776000
    ./1024^3
    73.2650756835

    The good old pocket calculator :-)

    /bb

  18. Re: hard drive size on a RAID1 array

    Trevor Hemsley wrote:

    > On Wed, 16 Apr 2008 00:12:47 UTC in comp.os.linux.hardware, Aragorn
    > wrote:
    >
    >> I'll take your word for it - or the word of your /mkswap/ /man/ page. ;-)

    >
    > Is this any more convincing?
    >
    > [trevor@trevors ~]$ /sbin/swapon -s
    > Filename Type Size Used
    > Priority
    > /dev/mapper/VolGroupTH-LogVolSwap partition 4194296 920 -1


    Oh, but I was convinced already the first time around! :-)

    It's just that Mandrake/Mandriva was/is obviously distributing
    outdated /man/ pages then, which is quite strange as they're pretty cutting
    edge in terms of everything else. ;-)

    --
    Aragorn
    (registered GNU/Linux user #223157)

  19. Re: hard drive size on a RAID1 array

    On Apr 16, 6:47 pm, birre wrote:
    > Did you try it without the 400000M?
    >
    > The danger here is making a filesystem that is bigger than the slice,
    > so you get a surprise later when your disk is full of data.
    >
    > But as I see it now, you are safe - maybe too safe, leaving 73 GB unused.
    > [snip]


    Hi there..
    I wanted to see if it worked before anything else -- so I chose 400000M..
    When I realised it worked, I decided that having unallocated space
    never hurt before.. Granted, 70 GB is a bit much, but.. I doubt it'll
    ever get to the point where I'll need it..
