
On Sat, 2008-04-12 at 17:38 -0400, David Schultz wrote:
> On Sat, Apr 12, 2008, Coleman Kane wrote:
> > On Sat, 2008-04-12 at 15:55 -0400, David Schultz wrote:
> > > On Sat, Apr 12, 2008, Joe Marcus Clarke wrote:
> > > > On Sat, 2008-04-12 at 15:09 -0400, Coleman Kane wrote:
> > > > > Hello,
> > > > >
> > > > > Recently we've been having a discussion on the GNOME list about fixing
> > > > > the seahorse breakage introduced with the latest GNOME 2.22, rooted in
> > > > > the fact that FreeBSD's mlock(2) implementation is only usable if you
> > > > > have superuser privileges. Due to bugs in seahorse, the lack of mlock(2)
> > > > > causes many seahorse applications to die. I've posted a suggested patch
> > > > > to
> > > [...]
> > > > > As a third idea, we could leave the per-process limit (to abide by
> > > > > historical documentation), but also add a sysctl that enforces a
> > > > > system-wide "max mlock pages" which can be tested by the mlock(2)
> > > > > syscall, refusing to mlock(2) more memory if the limit is hit.
> > > >
> > > > I think this already exists in -CURRENT: vm.max_wired ("System-wide
> > > > limit to wired page count"). This is tested by mlock(2) in addition to
> > > > RLIMIT_MEMLOCK.
> > >
> > > First of all, many other operating systems such as Solaris also
> > > restrict mlock(2) to the superuser, so this is a bug in seahorse.
> > >
> > > That said, it seems like allowing ordinary users to mlock(2) small
> > > amounts of memory (e.g., vm_page_max_wired / 4 across all
> > > non-superuser processes by default) would fix your problem and be
> > > easy to implement. Of course, per-user or per-process limits
> > > would be more flexible, but how many people really have lots of
> > > users who are trying to abuse the system?
> > >
> >
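For anyone who wants to see the failure mode being discussed, here is a small sketch (Python via ctypes rather than C, purely for illustration; the constants and error handling are mine, not from the thread). On the FreeBSD releases under discussion the mlock() call fails with EPERM for unprivileged users, which is exactly what trips up seahorse; on systems with a small nonzero default rlimit, locking a single page usually succeeds:

```python
# Sketch: report the memorylocked rlimit and try to mlock(2) one page.
import ctypes
import ctypes.util
import mmap
import os
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_MEMLOCK)
print("RLIMIT_MEMLOCK (memorylocked) soft limit:", soft)

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
page = mmap.PAGESIZE
buf = mmap.mmap(-1, page)  # one anonymous page
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))

if libc.mlock(ctypes.c_void_p(addr), ctypes.c_size_t(page)) == 0:
    print("mlock of one page: ok")
    libc.munlock(ctypes.c_void_p(addr), ctypes.c_size_t(page))
else:
    # EPERM here is the failure mode that breaks seahorse/gnome-keyring.
    print("mlock of one page failed:", os.strerror(ctypes.get_errno()))
```

Either branch is informative: the error branch shows what an unprivileged seahorse process sees, the success branch shows the behavior a small per-user allowance would provide.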
> > I did some math and came up with the following per-user limit on my
> > system. Using the default install, my maxproc is set to 5547:
> >   max_secure_mem = maxproc * memorylocked = 5547 * 16384 = 90882048 =
> > about 87MB
> >
> > So, under my operating conditions (2GB System RAM), a user's maximum DoS
> > attempt would be capped at 87MB... which doesn't seem as serious
> > anymore. This is using the 16K memorylocked value that gnome-keyring &
> > friends seem to work fine with.
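The arithmetic above checks out; as a quick sketch (values taken from the message, worst case assuming every one of maxproc processes locks its full memorylocked allowance):

```python
# Reproduce the per-user worst case from the message above.
maxproc = 5547          # default maxproc on the system described
memorylocked = 16384    # 16K per-process memorylocked limit, in bytes

max_secure_mem = maxproc * memorylocked
print(max_secure_mem)                    # 90882048 bytes
print(round(max_secure_mem / 2**20, 1))  # 86.7 MiB, i.e. "about 87MB"
```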

>
> Yes, but why bother? A system-wide limit would still be far easier
> to implement than keeping track of a per-process limit, and it
> allows processes to lock more memory on single-user systems, while
> keeping the overall limit low (since most processes don't lock
> anything).
>
> There are additional difficulties with per-process limits such as
> deciding who to charge when multiple processes lock a shared
> memory segment. If you charge one or the other, e.g., processes A
> and B share a locked segment and you charge A, then B could exceed
> its limit by locking another segment, then having A munlock(). If
> you charge both processes, then mmap() on a segment that another
> process has locked might fail.
>


Hey guys,

Would arch@ be a more suitable place to discuss this / probe for more
info?

-- 
Coleman Kane

