terabyte limit problem with vinum/ccd - FreeBSD

I've tried freebsd-questions but didn't find much help there, so now I'm
trying here.
I have around 10 IDE drives which add up to over a terabyte. My goal
is to use them all as one big drive by any means necessary (I have a
backup, so redundancy is not needed -- only space in this situation).
I used to use vinum (and still would like to), but I hit the terabyte
limit with UFS and was told I would have to upgrade to 5.1 in order to
take advantage of UFS2 and >1TB filesystems -- so that's what I've done.
However, I still seem to have exactly the same problems. I'm now trying
it on a whole new box and set of drives, with the same results.
It doesn't matter whether I use vinum or ccdconfig -- both work fine and
predictably until I make the volume larger than a terabyte; then I get
the following on FreeBSD 5.1-RELEASE:
# ccdconfig -cv ccd0 16 none /dev/ad1s1e .. /dev/ad10s1e
ccd0: 10 components (ad1s1e, .., ad10s1e), 2223956864 blocks interleaved at 16 blocks
# newfs /dev/ccd0
newfs: wtfs: 512 bytes at sector 2223956863: Invalid argument
# newfs /dev/vinum/vinum0
/dev/vinum/vinum0: 1085915.5MB (2223954992 sectors) block size 16384, fragment size 2048
using 5910 cylinder groups of 183.77MB, 11761 blks, 23552 inodes.
newfs: can't read old UFS1 superblock: read error from block device: Invalid argument
If I remove any single drive from the configuration, making it < 1TB,
it works fine; I can newfs and whatnot. Add it back up to >1TB and
newfs fails again.
It seems to break when newfs calls wtfs, which just calls bwrite, which
seems to be a libufs front end for pwrite. Unfortunately, at this
point I'm at a loss as to exactly what the values should be and why it's
still failing. (Is newfs wrong? The vinum backend?)
I've heard vinum has no practical limits and should work fine beyond a
terabyte, and I've spoken directly with people who have newfs'd over a
terabyte using hardware RAID (3ware) -- so I'm just baffled. Am I doing
something wrong, or is this configuration simply not supported yet?
It may be interesting to note that if I do newfs -N for testing, I get
no errors and the output looks fine -- mainly because -N never calls bwrite.
Any suggestions, confirmation on whether or not this should work, or
alternative ideas? A 12-channel IDE RAID controller is about $700 more
than I've got right now.
Thanks for any help -- been driving me nuts for a while now.