Thread: Can't see 10TB FS!?

  1. Can't see 10TB FS!?

    I have a Coraid SAN with 15 750 GB drives in a RAID 5 plus a hot spare. The
    total FS size is being reported by the Coraid device as 9752.032 GB (just
    under 10 TB). This SAN (using the aoe module) has been shared between two
    FreeBSD 6.2 and 6.3 boxes as separate filesystems without issue, but I
    now want it to present a single filesystem. The OS is only reporting 911 GB!
    Does anyone know whether FreeBSD can really see a single filesystem this
    large? The two machines previously had 2-3 TB filesystems carved out of
    this SAN. It looks like the size is getting truncated or wrapped somewhere.
    Any help/insight would be appreciated.

    From uname -a:
    FreeBSD 6.3-RELEASE-p3 FreeBSD 6.3-RELEASE-p3 #0: Wed Jul 16
    18:59:27 CDT 2008 root@:/usr/obj/usr/src/sys/bpc i386

    From bsdlabel:
    # /dev/aoed0:
    8 partitions:
    # size offset fstype [fsize bsize bps/cpg]
    a: 1867068525 16 unused 0 0
    c: 1867068541 0 unused 0 0 # "raw" part, don't edit

    From fdisk:
    # fdisk -I aoed0
    ******* Working on device /dev/aoed0 *******
    fdisk: invalid fdisk partition table found
    fdisk: Geom not found
    # fdisk aoed0
    ******* Working on device /dev/aoed0 *******
    parameters extracted from in-core disklabel are:
    cylinders=116219 heads=255 sectors/track=63 (16065 blks/cyl)

    Figures below won't work with BIOS for partitions not in cyl 1
    parameters to be used for BIOS calculations are:
    cylinders=116219 heads=255 sectors/track=63 (16065 blks/cyl)

    Media sector size is 512
    Warning: BIOS sector numbering starts with sector 1
    Information from DOS bootblock is:
    The data for partition 1 is:
    sysid 165 (0xa5),(FreeBSD/NetBSD/386BSD)
    start 63, size 1867058172 (911649 Meg), flag 80 (active)
    beg: cyl 0/ head 1/ sector 1;
    end: cyl 506/ head 254/ sector 63
    The data for partition 2 is:

    The data for partition 3 is:

    The data for partition 4 is:
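
    If I had to guess (this is just arithmetic, not something I have verified):
    both the MBR and the classic BSD disklabel store partition sizes as 32-bit
    counts of 512-byte sectors, which caps out at 2 TB. 9752 GB is roughly
    19,046,937,500 sectors; subtract 4 x 2^32 and you land within a few hundred
    sectors of the 1867068541 in the label above, and 1867068541 sectors x 512
    bytes is also where the ~911649 Meg from fdisk comes from. A quick sanity
    check of what the kernel itself sees for the raw device, independent of any
    partition table:

    # diskinfo -v /dev/aoed0

    If the "mediasize" lines there show the full ~9.75 TB, the device itself is
    fine and only the MBR/disklabel figures are wrapping.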



  2. Re: Can't see 10TB FS!?

    Ok.. so it's being a bit more stubborn than I thought. I created 7
    raid0 arrays in the Coraid that are made up of 2 drives each. When I run
    fdisk on any of them alone I get a 1.4 TB filesystem. Great... Now let's
    just stripe the damn things... no luck: 1627051 Meg!?!?

    # gvinum l
    7 drives:
    D g State: up /dev/aoed6 A: 0/1430809 MB (0%)
    D f State: up /dev/aoed5 A: 0/1430809 MB (0%)
    D e State: up /dev/aoed4 A: 0/1430809 MB (0%)
    D d State: up /dev/aoed3 A: 0/1430809 MB (0%)
    D c State: up /dev/aoed2 A: 0/1430809 MB (0%)
    D b State: up /dev/aoed1 A: 0/1430809 MB (0%)
    D a State: up /dev/aoed0 A: 0/1430809 MB (0%)

    1 volume:
    V raid0 State: up Plexes: 1 Size: 9780 GB

    1 plex:
    P raid0.p0 S State: up Subdisks: 7 Size: 9780 GB

    7 subdisks:
    S raid0.p0.s6 State: up D: g Size: 1397 GB
    S raid0.p0.s5 State: up D: f Size: 1397 GB
    S raid0.p0.s4 State: up D: e Size: 1397 GB
    S raid0.p0.s3 State: up D: d Size: 1397 GB
    S raid0.p0.s2 State: up D: c Size: 1397 GB
    S raid0.p0.s1 State: up D: b Size: 1397 GB
    S raid0.p0.s0 State: up D: a Size: 1397 GB
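
    In case anyone wants to reproduce this: a config along these lines, fed to
    "gvinum create", should give the layout above (the stripe size is just a
    guess at a common value, and "length 0" means "use all available space" if
    I'm reading the vinum config syntax right):

    drive a device /dev/aoed0
    drive b device /dev/aoed1
    drive c device /dev/aoed2
    drive d device /dev/aoed3
    drive e device /dev/aoed4
    drive f device /dev/aoed5
    drive g device /dev/aoed6
    volume raid0
      plex org striped 512k
        sd length 0 drive a
        sd length 0 drive b
        sd length 0 drive c
        sd length 0 drive d
        sd length 0 drive e
        sd length 0 drive f
        sd length 0 drive g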

    # fdisk -I /dev/gvinum/raid0
    ******* Working on device /dev/gvinum/raid0 *******
    fdisk: Geom not found

    # fdisk /dev/gvinum/raid0
    ******* Working on device /dev/gvinum/raid0 *******
    parameters extracted from in-core disklabel are:
    cylinders=1276817 heads=255 sectors/track=63 (16065 blks/cyl)

    Figures below won't work with BIOS for partitions not in cyl 1
    parameters to be used for BIOS calculations are:
    cylinders=1276817 heads=255 sectors/track=63 (16065 blks/cyl)

    Media sector size is 512
    Warning: BIOS sector numbering starts with sector 1
    Information from DOS bootblock is:
    The data for partition 1 is:
    sysid 165 (0xa5),(FreeBSD/NetBSD/386BSD)
    start 63, size 3332202237 (1627051 Meg), flag 80 (active)
    beg: cyl 0/ head 1/ sector 1;
    end: cyl 571/ head 254/ sector 63
    The data for partition 2 is:

    The data for partition 3 is:

    The data for partition 4 is:
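
    Same wrap-around arithmetic as before, by the look of it: 7 subdisks of
    1430809 MB come to roughly 20.5 billion 512-byte sectors, and knocking off
    4 x 2^32 lands within a few MB of the 3332202237 sectors fdisk shows. So
    fdisk/MBR simply cannot describe anything over 2 TB; on a gvinum volume
    there is arguably no need for it at all, and in theory the volume could be
    newfs'ed directly (untested at this size, and fsck on ~10 TB of UFS is its
    own adventure):

    # newfs -U /dev/gvinum/raid0
    # mount /dev/gvinum/raid0 /mnt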


  3. Re: Can't see 10TB FS!?

    James wrote:

    > Ok.. so it's being a bit more stubborn than I thought. I created 7
    > raid0 arrays in the Coraid that are made up of 2 drives each. When I run
    > fdisk on any of them alone I get a 1.4 TB filesystem. Great... Now let's
    > just stripe the damn things... no luck: 1627051 Meg!?!?

    [snip]

    I don't know much about this subject and would like to learn more, but you
    may want to investigate using GPT. While I don't think the loader can
    boot from GPT, I do recall reading something somewhere about
    a "compatibility" capability that would let you have both; i.e.,
    the "normal" disklabel for booting / and the OS, and the rest labeled
    as GPT.

    I also don't know very much about the GEOM subsystem, but IIRC there is
    a "glabel" command that may be useful in the >2TB space.

    Another idea may be ZFS. I still consider it somewhat "experimental"
    on FreeBSD, but there are reports of people using it. I've only seen it so
    far in conjunction with Solaris.

    The manpage for gpt isn't extremely enlightening but may serve as a starting
    point. Perhaps you can find some more info from people who have used it
    successfully. Just tossing out some ideas here...
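
    Something concrete to try, maybe: I think the gpt(8) tool in the base
    system can do the labeling. Untested, with the device name taken from your
    first post; gpt create may want -f if an MBR is already present (and this
    rewrites the partitioning, so anything on the disk is toast), and I believe
    the new partition then shows up as /dev/aoed0p1 via the GEOM GPT class,
    though the exact name may differ:

    # gpt create /dev/aoed0
    # gpt add -t ufs /dev/aoed0
    # gpt show /dev/aoed0
    # newfs -U /dev/aoed0p1

    The glabel route would sidestep partition tables entirely; it just writes
    a label into the provider's last sector and gives you a stable name under
    /dev/label (the label name here is only an example):

    # glabel label -v bigfs /dev/gvinum/raid0
    # newfs -U /dev/label/bigfs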

    -Mike



  4. Re: Can't see 10TB FS!?

    Jason Bourne wrote:
    > James wrote:
    >
    >> Ok.. so it's being a bit more stubborn than I thought. I created 7
    >> raid0 arrays in the Coraid that are made up of 2 drives each. When I run
    >> fdisk on any of them alone I get a 1.4 TB filesystem. Great... Now let's
    >> just stripe the damn things... no luck: 1627051 Meg!?!?

    > [snip]
    >
    > I don't know much about this subject and would like to learn more, but you
    > may want to investigate using GPT.


    Anything over 2 TB must use GPT, IIRC; the MBR partition table only
    stores a 32-bit sector count.

    > Another idea may be ZFS. I still consider it somewhat "experimental"
    > on FreeBSD, but there are reports of people using it. I've only seen it so
    > far in conjunction with Solaris.


    The ZFS in HEAD may be better suited.
    Or use Solaris/OpenSolaris and export that over NFS.
    The solarisinternals wiki didn't mention any memory requirements
    any more, last time I looked, but previously you were advised to have
    1 GB of RAM per TB of storage.
    But ZFS only works well if it has full control over the individual
    disks; giving it a single 10 TB volume is kind of pointless.
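
    If the Coraid can export the member disks (or those seven 2-disk LUNs)
    individually over AoE, ZFS could do the striping and redundancy itself,
    something along the lines of the following (device names borrowed from
    your gvinum post; raidz is only one option, and I have not tried ZFS on
    top of AoE):

    # zpool create tank raidz aoed0 aoed1 aoed2 aoed3 aoed4 aoed5 aoed6
    # zfs list

    On i386 you would also have to watch the kernel memory tuning, from what
    I have read.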


    > The manpage for gpt isn't extremely enlightening but may serve as a starting
    > point. Perhaps you can find some more info from people who have used it
    > successfully. Just tossing out some ideas here...


    The freebsd-fs or freebsd-questions mailing lists would be the place to ask...


    Rainer

  5. Re: Can't see 10TB FS!?

    Rainer Duffner wrote:
    > The ZFS in HEAD may be better suited.
    > Or use Solaris/OpenSolaris and export that over NFS.
    > The solarisinternals wiki didn't mention any memory requirements
    > any more, last time I looked, but previously you were advised to have
    > 1 GB of RAM per TB of storage.
    > But ZFS only works well if it has full control over the individual
    > disks; giving it a single 10 TB volume is kind of pointless.


    Thanks to both of you for your replies. After spending entirely too
    much time on this project we installed CentOS and are now using GFS
    without issue [knocks on wood]. We /should/ also be able to share the
    filesystem with another backup box should the load get too great for
    this one machine.
    FreeBSD is always my first choice, but sometimes it just doesn't do
    what I need.

    -James

  6. Re: Can't see 10TB FS!?

    James wrote:
    > Rainer Duffner wrote:
    >> The ZFS in HEAD may be better suited.
    >> Or use Solaris/OpenSolaris and export that over NFS.
    >> The solarisinternals wiki didn't mention any memory requirements
    >> any more, last time I looked, but previously you were advised to have
    >> 1 GB of RAM per TB of storage.
    >> But ZFS only works well if it has full control over the individual
    >> disks; giving it a single 10 TB volume is kind of pointless.

    >
    > Thanks to both of you for your replies. After spending entirely too
    > much time on this project we installed CentOS and are now using GFS


    I hope it works for you.
    We think it's way too complicated, and there are really only a handful of
    experts who can fix problems. And because it's not very widely used, you
    may be the first one to encounter them in that specific use case....
    Have you tried fsck'ing that 10 TB filesystem (when it's decently
    filled...)?

    > without issue [knocks on wood]. We /should/ also be able to share the
    > filesystem with another backup box should the load get too great for
    > this one machine.
    > FreeBSD is always my first choice, but sometimes it just doesn't do what
    > I need


    Yep. That's why we use Solaris when we can/must (and RHEL/CentOS and
    Ubuntu in Virtuozzo, when we absolutely have to...)



    Rainer
