On Tue, 28 Oct 2008, Peter Zijlstra wrote:

> > This patch implements the management of dirty node maps for an address
> > space through the following functions:
> >
> > cpuset_clear_dirty_nodes(mapping) Clear the map of dirty nodes
> >
> > cpuset_update_nodes(mapping, page) Record a node in the dirty nodes map
> >
> > cpuset_init_dirty_nodes(mapping) Initialization of the map
> >
> >
> > The dirty map may be stored either directly in the mapping (for NUMA
> > systems with fewer than BITS_PER_LONG nodes) or separately allocated for
> > systems with a large number of nodes (e.g. ia64 with 1024 nodes).
> >
> > Updating the dirty map may involve allocating it first for large
> > configurations. Therefore, we protect the allocation and setting of a
> > node in the map through the tree_lock. The tree_lock is already taken
> > when a page is dirtied so there is no additional locking overhead if we
> > insert the updating of the nodemask there.

>
> I find this usage of tree lock most bothersome, as my concurrent
> pagecache patches take the lock out. In which case this _does_ cause
> extra locking overhead.
>


Yeah, if we don't serialize with tree_lock then we'll need to protect the
attachment of mapping->dirty_nodes with a new spinlock in struct
address_space (and only for configs where MAX_NUMNODES > BITS_PER_LONG).
That locking overhead is negligible once mapping->dirty_nodes is
non-NULL, since setting a node in an already-attached nodemask needs no
protection.

Are your concurrent pagecache patches in the latest mmotm? If so, I can
rebase this entire patchset off that.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/