On Fri, 09 May 2008 19:06:33 +0200, Joerg Sonnenberger wrote:

> On Fri, May 09, 2008 at 06:50:10PM +0200, Anders Nore wrote:
>> Yes, that would probably be bad for the database, but I'm sure one can
>> get around this problem by copying it before changing the db, and
>> deleting the copy if the run doesn't fail. On the next run it would
>> check for a copy, use that, and assume that the last run was
>> unsuccessful.

> /var/db/pkg contains 10MB for the various packages installed on my
> laptop and 10MB for the cache of +CONTENTS. You don't want to copy that
> around all the time.
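
To make the copy/restore idea above concrete, a rough (untested) sketch
could look like the following; the database path and the copy_file()
helper are invented for illustration:

#include <err.h>
#include <stdio.h>
#include <unistd.h>

/* Invented paths; the real database lives wherever pkg_install puts it. */
#define PKGDB		"/var/db/pkg/pkgdb.db"
#define PKGDB_BAK	"/var/db/pkg/pkgdb.db.bak"

/* Plain byte-for-byte copy of src to dst; returns 0 on success. */
static int
copy_file(const char *src, const char *dst)
{
	FILE *in, *out;
	char buf[8192];
	size_t n;

	if ((in = fopen(src, "rb")) == NULL)
		return -1;
	if ((out = fopen(dst, "wb")) == NULL) {
		(void)fclose(in);
		return -1;
	}
	while ((n = fread(buf, 1, sizeof(buf), in)) > 0) {
		if (fwrite(buf, 1, n, out) != n) {
			(void)fclose(in);
			(void)fclose(out);
			return -1;
		}
	}
	(void)fclose(in);
	return fclose(out) == 0 ? 0 : -1;
}

int
main(void)
{
	/* A leftover backup means the previous run died mid-update. */
	if (access(PKGDB_BAK, F_OK) == 0 && rename(PKGDB_BAK, PKGDB) == -1)
		err(1, "cannot restore %s", PKGDB);

	/* Copy the database before touching it ... */
	if (copy_file(PKGDB, PKGDB_BAK) == -1)
		err(1, "cannot back up %s", PKGDB);

	/* ... modify the database here ... */

	/* ... and drop the backup only after the update succeeded. */
	if (unlink(PKGDB_BAK) == -1)
		warn("cannot remove %s", PKGDB_BAK);
	return 0;
}

Of course that still means copying the full 20MB or so on every
modification, which is exactly the objection above.
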
>>> Secondly, I would also advise against just storing all meta data in a
>>> single key/value pair. For example, +CONTENTS can be extremely large.
>>> Check texmf for a good example.

>> When it comes to storing large values in a key/value pair, I think
>> that's what bdb was designed for: handling large amounts of data (on
>> the order of gigabytes, even in keys) fast.

> No, actually that is exactly what it was *not* designed for. Having
> billions of keys is fine, but data that exceeds the size of a database
> page is going to slow things down. While that might not be a problem if
> you are copying the data to a new file anyway, it also means that
> fragmentation in the database will be more problematic.
> My main point is that for the interesting operations you want to do
> lookups with fine-grained keys, and that is not possible if you store
> the meta data as a blob. In fact, storing the meta data as a blob is no
> faster than just using the filesystem.
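
For reference: with the db(3) 1.85 btree interface, the page size is
whatever BTREEINFO.psize selects at open time, and a key/data item that
does not fit on a page is pushed out to overflow pages, which is where
the slowdown and fragmentation come from. A minimal open with an
explicit page size, assuming the 1.85 dbopen(3) API and an invented
file name, might look like:

#include <sys/types.h>
#include <db.h>
#include <err.h>
#include <fcntl.h>
#include <limits.h>
#include <string.h>

int
main(void)
{
	BTREEINFO bti;
	DB *db;

	memset(&bti, 0, sizeof(bti));
	bti.psize = 4096;		/* database page size in bytes */
	bti.cachesize = 1024 * 1024;	/* in-memory cache */

	/* "pkgdb.db" is just a placeholder name. */
	if ((db = dbopen("pkgdb.db", O_RDWR | O_CREAT, 0644,
	    DB_BTREE, &bti)) == NULL)
		err(1, "dbopen");

	/*
	 * Any key/data pair that does not fit on a 4KB page (a large
	 * +CONTENTS blob, say) ends up on chained overflow pages.
	 */

	(void)db->close(db);
	return 0;
}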

You are probably right, but how would you store the keys? Is storing the
key as, e.g., 'portname-1.2_3+CONTENT' a good solution?
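
If the keys do end up looking like that, one thing worth noting is that
a sorted btree makes partial lookups cheap: with the 1.85 interface,
seq() with R_CURSOR positions the cursor at the smallest key greater
than or equal to the one given, so all entries for one package can be
walked without reading any large blob. A small, untested sketch with
invented package and file names:

#include <sys/types.h>
#include <db.h>
#include <err.h>
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	DB *db;
	DBT key, data;
	char pkg[] = "foo-1.2_3";
	char firstkey[] = "foo-1.2_3+DESC";
	char desc[] = "An example package.\n";

	/* Placeholder file name; the real one would live under /var/db/pkg. */
	if ((db = dbopen("pkgdb.db", O_RDWR | O_CREAT, 0644,
	    DB_BTREE, NULL)) == NULL)
		err(1, "dbopen");

	/* One key per meta data file, e.g. "foo-1.2_3+DESC". */
	key.data = firstkey;
	key.size = strlen(firstkey);
	data.data = desc;
	data.size = strlen(desc);
	if (db->put(db, &key, &data, 0) == -1)
		err(1, "put");

	/*
	 * Prefix scan: position the cursor at the first key >= the package
	 * name and step forward until the prefix no longer matches.
	 */
	key.data = pkg;
	key.size = strlen(pkg);
	for (int rv = db->seq(db, &key, &data, R_CURSOR); rv == 0;
	    rv = db->seq(db, &key, &data, R_NEXT)) {
		if (key.size < strlen(pkg) ||
		    memcmp(key.data, pkg, strlen(pkg)) != 0)
			break;	/* past the last entry for this package */
		printf("%.*s (%zu bytes)\n", (int)key.size,
		    (char *)key.data, data.size);
	}

	(void)db->close(db);
	return 0;
}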